Feb 02 17:14:56 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 02 17:14:56 crc restorecon[4679]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 17:14:56 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 17:14:57 crc restorecon[4679]: 
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 17:14:57 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 17:14:57 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
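The "not reset as customized by admin" records above reflect normal relabel behavior rather than an error: libselinux's selinux_restorecon() leaves a file alone when its current context type is listed as customizable, and container_file_t is such a type under the targeted policy, so the kubelet's per-pod volume and container files keep their admin-set contexts. A minimal sketch for checking this on the node, assuming an SELinux-enabled host with the policycoreutils tools and the targeted policy store (adjust the /etc/selinux/ path otherwise); the etc-hosts path is taken from the records above:

  ls -Z /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts           # current context of one skipped file
  matchpathcon /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts    # context the loaded policy would assign
  cat /etc/selinux/targeted/contexts/customizable_types                                # types restorecon leaves alone by default
  restorecon -n -v -F /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts  # dry run; -F would reset even customizable types

Without -F, restorecon only reports the skip, which is exactly what the journal records here show.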
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 17:14:58 crc restorecon[4679]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 17:14:58 crc restorecon[4679]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 02 17:15:00 crc kubenswrapper[4858]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 17:15:00 crc kubenswrapper[4858]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 02 17:15:00 crc kubenswrapper[4858]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 17:15:00 crc kubenswrapper[4858]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
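Several of the deprecation warnings here (--container-runtime-endpoint, --minimum-container-ttl-duration, --volume-plugin-dir, --register-with-taints, and --system-reserved just below) all point at the same remedy: move the setting into the file passed to the kubelet's --config flag. A minimal KubeletConfiguration sketch of that migration follows; every concrete value is an illustrative assumption (CRI-O's conventional socket path, an example taint, example reservations), not data recovered from this host's log.

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# --container-runtime-endpoint -> containerRuntimeEndpoint (assumed CRI-O default socket)
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
# --volume-plugin-dir -> volumePluginDir (example path, not taken from this node)
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
# --register-with-taints -> registerWithTaints (example taint)
registerWithTaints:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
# --minimum-container-ttl-duration is superseded by eviction thresholds, e.g.:
evictionHard:
  memory.available: "100Mi"
# --system-reserved -> systemReserved (example reservations)
systemReserved:
  cpu: 500m
  memory: 1Gi

Command-line flag values generally take precedence over the config file, so settings migrated this way should also be dropped from the kubelet's command line.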
Feb 02 17:15:00 crc kubenswrapper[4858]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 02 17:15:00 crc kubenswrapper[4858]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.153283 4858 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158498 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158529 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158539 4858 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158548 4858 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158558 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158566 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158574 4858 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158581 4858 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158590 4858 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158599 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158606 4858 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158614 4858 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158623 4858 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158630 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158638 4858 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158645 4858 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158653 4858 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158660 4858 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158668 4858 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158676 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158684 4858 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 
17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158691 4858 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158698 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158706 4858 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158720 4858 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158728 4858 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158736 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158746 4858 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158756 4858 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158764 4858 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158772 4858 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158779 4858 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158786 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158794 4858 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158801 4858 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158809 4858 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158819 4858 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158830 4858 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158838 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158847 4858 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158855 4858 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158863 4858 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158871 4858 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158882 4858 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158889 4858 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158900 4858 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158911 4858 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158922 4858 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158931 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158940 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158948 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158957 4858 feature_gate.go:330] unrecognized feature gate: Example Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158964 4858 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.158994 4858 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.159003 4858 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.159011 4858 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.159019 4858 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.159026 4858 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.159036 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.159043 4858 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.159051 4858 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.159058 4858 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.159066 4858 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.159073 4858 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.159081 4858 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.159088 4858 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.159096 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.159104 4858 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.159112 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.159119 4858 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.159126 4858 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 
17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160090 4858 flags.go:64] FLAG: --address="0.0.0.0" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160113 4858 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160127 4858 flags.go:64] FLAG: --anonymous-auth="true" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160138 4858 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160149 4858 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160158 4858 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160177 4858 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160188 4858 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160198 4858 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160208 4858 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160217 4858 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160227 4858 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160236 4858 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160245 4858 flags.go:64] FLAG: --cgroup-root="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160254 4858 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160263 4858 flags.go:64] FLAG: --client-ca-file="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160271 4858 flags.go:64] FLAG: --cloud-config="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160279 4858 flags.go:64] FLAG: --cloud-provider="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160288 4858 flags.go:64] FLAG: --cluster-dns="[]" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160302 4858 flags.go:64] FLAG: --cluster-domain="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160310 4858 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160320 4858 flags.go:64] FLAG: --config-dir="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160329 4858 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160338 4858 flags.go:64] FLAG: --container-log-max-files="5" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160358 4858 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160366 4858 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160375 4858 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160385 4858 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160394 4858 flags.go:64] FLAG: --contention-profiling="false" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 
17:15:00.160403 4858 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160411 4858 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160420 4858 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160429 4858 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160439 4858 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160448 4858 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160457 4858 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160465 4858 flags.go:64] FLAG: --enable-load-reader="false" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160475 4858 flags.go:64] FLAG: --enable-server="true" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160483 4858 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160494 4858 flags.go:64] FLAG: --event-burst="100" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160504 4858 flags.go:64] FLAG: --event-qps="50" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160513 4858 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160522 4858 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160531 4858 flags.go:64] FLAG: --eviction-hard="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160541 4858 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160550 4858 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160559 4858 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160568 4858 flags.go:64] FLAG: --eviction-soft="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160577 4858 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160586 4858 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160594 4858 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160603 4858 flags.go:64] FLAG: --experimental-mounter-path="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160612 4858 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160621 4858 flags.go:64] FLAG: --fail-swap-on="true" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160630 4858 flags.go:64] FLAG: --feature-gates="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160640 4858 flags.go:64] FLAG: --file-check-frequency="20s" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160649 4858 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160659 4858 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160668 4858 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 
17:15:00.160677 4858 flags.go:64] FLAG: --healthz-port="10248" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160686 4858 flags.go:64] FLAG: --help="false" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160695 4858 flags.go:64] FLAG: --hostname-override="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160703 4858 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160713 4858 flags.go:64] FLAG: --http-check-frequency="20s" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160721 4858 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160730 4858 flags.go:64] FLAG: --image-credential-provider-config="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160738 4858 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160747 4858 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160756 4858 flags.go:64] FLAG: --image-service-endpoint="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160764 4858 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160773 4858 flags.go:64] FLAG: --kube-api-burst="100" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160782 4858 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160791 4858 flags.go:64] FLAG: --kube-api-qps="50" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160800 4858 flags.go:64] FLAG: --kube-reserved="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160810 4858 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160818 4858 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160828 4858 flags.go:64] FLAG: --kubelet-cgroups="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160837 4858 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160845 4858 flags.go:64] FLAG: --lock-file="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160854 4858 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160863 4858 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160872 4858 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160885 4858 flags.go:64] FLAG: --log-json-split-stream="false" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160893 4858 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160903 4858 flags.go:64] FLAG: --log-text-split-stream="false" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160912 4858 flags.go:64] FLAG: --logging-format="text" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160921 4858 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160931 4858 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160939 4858 flags.go:64] FLAG: --manifest-url="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160948 4858 
flags.go:64] FLAG: --manifest-url-header="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160960 4858 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.160969 4858 flags.go:64] FLAG: --max-open-files="1000000" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161007 4858 flags.go:64] FLAG: --max-pods="110" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161016 4858 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161026 4858 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161035 4858 flags.go:64] FLAG: --memory-manager-policy="None" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161044 4858 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161053 4858 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161062 4858 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161071 4858 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161090 4858 flags.go:64] FLAG: --node-status-max-images="50" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161099 4858 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161108 4858 flags.go:64] FLAG: --oom-score-adj="-999" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161117 4858 flags.go:64] FLAG: --pod-cidr="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161125 4858 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161140 4858 flags.go:64] FLAG: --pod-manifest-path="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161148 4858 flags.go:64] FLAG: --pod-max-pids="-1" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161157 4858 flags.go:64] FLAG: --pods-per-core="0" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161166 4858 flags.go:64] FLAG: --port="10250" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161175 4858 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161184 4858 flags.go:64] FLAG: --provider-id="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161193 4858 flags.go:64] FLAG: --qos-reserved="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161204 4858 flags.go:64] FLAG: --read-only-port="10255" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161213 4858 flags.go:64] FLAG: --register-node="true" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161222 4858 flags.go:64] FLAG: --register-schedulable="true" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161232 4858 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161246 4858 flags.go:64] FLAG: --registry-burst="10" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161254 4858 flags.go:64] FLAG: --registry-qps="5" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161268 4858 flags.go:64] 
FLAG: --reserved-cpus="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161279 4858 flags.go:64] FLAG: --reserved-memory="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161309 4858 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161323 4858 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161335 4858 flags.go:64] FLAG: --rotate-certificates="false" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161347 4858 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161358 4858 flags.go:64] FLAG: --runonce="false" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161369 4858 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161381 4858 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161393 4858 flags.go:64] FLAG: --seccomp-default="false" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161402 4858 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161411 4858 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161421 4858 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161430 4858 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161475 4858 flags.go:64] FLAG: --storage-driver-password="root" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161484 4858 flags.go:64] FLAG: --storage-driver-secure="false" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161493 4858 flags.go:64] FLAG: --storage-driver-table="stats" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161502 4858 flags.go:64] FLAG: --storage-driver-user="root" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161511 4858 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161520 4858 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161529 4858 flags.go:64] FLAG: --system-cgroups="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161538 4858 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161553 4858 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161564 4858 flags.go:64] FLAG: --tls-cert-file="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161574 4858 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161590 4858 flags.go:64] FLAG: --tls-min-version="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161600 4858 flags.go:64] FLAG: --tls-private-key-file="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161610 4858 flags.go:64] FLAG: --topology-manager-policy="none" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161622 4858 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161631 4858 flags.go:64] FLAG: --topology-manager-scope="container" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161642 4858 flags.go:64] 
FLAG: --v="2" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161655 4858 flags.go:64] FLAG: --version="false" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161667 4858 flags.go:64] FLAG: --vmodule="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161678 4858 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.161690 4858 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.161934 4858 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.161947 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.161957 4858 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.161966 4858 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162008 4858 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162026 4858 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162036 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162045 4858 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162053 4858 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162062 4858 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162072 4858 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162080 4858 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162088 4858 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162095 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162104 4858 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162112 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162121 4858 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162129 4858 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162137 4858 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162145 4858 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162153 4858 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162162 4858 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 17:15:00 crc 
kubenswrapper[4858]: W0202 17:15:00.162170 4858 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162179 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162190 4858 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162200 4858 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162208 4858 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162217 4858 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162226 4858 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162233 4858 feature_gate.go:330] unrecognized feature gate: Example Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162241 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162253 4858 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162263 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162273 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162282 4858 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162292 4858 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162303 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162314 4858 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162324 4858 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162333 4858 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162344 4858 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162353 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162363 4858 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162372 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162381 4858 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162390 4858 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162399 4858 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162406 4858 
feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162417 4858 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162427 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162436 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162444 4858 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162452 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162459 4858 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162467 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162477 4858 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162487 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162496 4858 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162506 4858 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162514 4858 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162523 4858 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162531 4858 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162539 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162547 4858 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162555 4858 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162562 4858 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162569 4858 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162579 4858 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162586 4858 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162594 4858 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.162601 4858 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.162626 4858 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false 
EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.177364 4858 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.177405 4858 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177525 4858 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177538 4858 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177547 4858 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177556 4858 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177564 4858 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177573 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177582 4858 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177592 4858 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177603 4858 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177613 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177622 4858 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177632 4858 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177641 4858 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177650 4858 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177658 4858 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177666 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177675 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177683 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177691 4858 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177699 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177707 4858 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 17:15:00 crc 
kubenswrapper[4858]: W0202 17:15:00.177714 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177724 4858 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177732 4858 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177739 4858 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177747 4858 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177754 4858 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177762 4858 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177770 4858 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177777 4858 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177785 4858 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177793 4858 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177801 4858 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177808 4858 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177815 4858 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177823 4858 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177830 4858 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177837 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177845 4858 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177853 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177862 4858 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177869 4858 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177877 4858 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177885 4858 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177894 4858 feature_gate.go:330] unrecognized feature gate: Example Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177901 4858 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177909 4858 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 17:15:00 crc 
kubenswrapper[4858]: W0202 17:15:00.177917 4858 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177925 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177933 4858 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177941 4858 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177948 4858 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177956 4858 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.177967 4858 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178000 4858 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178009 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178019 4858 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178028 4858 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178051 4858 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178059 4858 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178067 4858 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178077 4858 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
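The `flags.go:64] FLAG:` run above records the effective value of every kubelet command-line flag at startup, visible at this node's --v="2" verbosity. Worth noting here: --cgroup-driver="cgroupfs" on the command line (the systemd driver reported by the CRI runtime wins later, see "Using cgroup driver setting received from the CRI runtime" below), --node-ip="192.168.126.11", and the control-plane taint in --register-with-taints. A minimal sketch for turning that dump into a lookup table, assuming you have the journal text in hand; the regex and helper name are ours, not kubelet tooling:

```python
import re

# Matches entries such as: flags.go:64] FLAG: --node-ip="192.168.126.11"
FLAG_RE = re.compile(r'flags\.go:64\] FLAG: (--[\w-]+)="([^"]*)"')

def parse_flag_dump(journal_text: str) -> dict:
    """Map each logged --flag to the quoted value the kubelet reported."""
    return dict(FLAG_RE.findall(journal_text))

excerpt = (
    'I0202 17:15:00.160236 4858 flags.go:64] FLAG: --cgroup-driver="cgroupfs" '
    'I0202 17:15:00.161062 4858 flags.go:64] FLAG: --node-ip="192.168.126.11"'
)
flags = parse_flag_dump(excerpt)
assert flags["--node-ip"] == "192.168.126.11"
```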
Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178086 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178095 4858 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178105 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178113 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178121 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178128 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178136 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178144 4858 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178151 4858 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.178164 4858 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178397 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178409 4858 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178420 4858 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178430 4858 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178438 4858 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178447 4858 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178454 4858 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178462 4858 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178470 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178477 4858 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178484 4858 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178492 4858 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178500 4858 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178508 4858 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178515 4858 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178522 4858 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178530 4858 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178540 4858 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
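Every `feature_gate.go:330` warning above names a gate that the kubelet's own feature-gate registry does not recognize. The names (GatewayAPI, NewOLM, VSphereStaticIPs, and so on) look like OpenShift cluster-level feature gates handed to the kubelet wholesale; the kubelet warns and skips each unknown gate instead of failing, while the gates it does know (the deprecated KMSv1 and the GA gates ValidatingAdmissionPolicy, CloudDualStackNodeIPs, and DisableKubeletCloudCredentialProviders) are applied. The near-identical list repeats under different microsecond offsets because the gate map is evaluated once per configuration pass. A small triage sketch, ours rather than any kubelet tool, that reduces the noise to one line per unknown gate:

```python
import re
from collections import Counter

# Matches entries such as: feature_gate.go:330] unrecognized feature gate: HardwareSpeed
UNRECOGNIZED_RE = re.compile(r"unrecognized feature gate: (\w+)")

def unknown_gates(journal_text: str) -> Counter:
    """Count how many passes warned about each unrecognized gate name."""
    return Counter(UNRECOGNIZED_RE.findall(journal_text))

excerpt = (
    "W0202 17:15:00.158498 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed "
    "W0202 17:15:00.162095 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed"
)
print(unknown_gates(excerpt))  # Counter({'HardwareSpeed': 2}): same gate, two passes
```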
Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178549 4858 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178557 4858 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178564 4858 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178572 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178587 4858 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178594 4858 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178602 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178609 4858 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178616 4858 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178624 4858 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178632 4858 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178639 4858 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178647 4858 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178654 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178664 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178672 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178680 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178687 4858 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178695 4858 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178704 4858 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178712 4858 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178720 4858 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178727 4858 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178734 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178742 4858 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178750 4858 feature_gate.go:330] unrecognized feature gate: 
ManagedBootImages Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178757 4858 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178765 4858 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178772 4858 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178780 4858 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178787 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178797 4858 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178807 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178815 4858 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178823 4858 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178832 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178841 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178849 4858 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178857 4858 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178864 4858 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178873 4858 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178881 4858 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178889 4858 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178896 4858 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178904 4858 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178914 4858 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178924 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178933 4858 feature_gate.go:330] unrecognized feature gate: Example Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178943 4858 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178951 4858 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178959 4858 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178968 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.178999 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.179011 4858 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.179876 4858 server.go:940] "Client rotation is on, will bootstrap in background" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.185666 4858 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.185791 4858 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
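Each warning pass ends with the map the kubelet will actually run with, logged at `feature_gate.go:386`: only the gates its registry recognizes survive, with CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, KMSv1, and ValidatingAdmissionPolicy forced on. The entries just above also show that client certificate rotation is enabled and bootstrap is skipped because the existing kubeconfig is still valid; the rotation attempt immediately below fails with connection refused because the API server at api-int.crc.testing:6443 is not answering yet this early in boot. A sketch for reading the Go-formatted `{map[...]}` summary back into a dict, under the assumption that gate names contain no spaces; the parser and its names are ours:

```python
import re

# Matches the summary logged at feature_gate.go:386, e.g.
#   feature gates: {map[KMSv1:true NodeSwap:false]}
MAP_RE = re.compile(r"feature gates: \{map\[([^\]]*)\]\}")

def effective_gates(journal_line: str) -> dict:
    """Parse the kubelet's effective feature-gate summary into {name: bool}."""
    match = MAP_RE.search(journal_line)
    if match is None:
        return {}
    pairs = (item.split(":", 1) for item in match.group(1).split())
    return {name: value == "true" for name, value in pairs}

line = ('I0202 17:15:00.179011 4858 feature_gate.go:386] feature gates: '
        '{map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}')
assert effective_gates(line) == {
    "CloudDualStackNodeIPs": True, "KMSv1": True, "NodeSwap": False,
}
```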
Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.188116 4858 server.go:997] "Starting client certificate rotation" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.188164 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.188405 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-27 07:02:46.493751619 +0000 UTC Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.188562 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.214388 4858 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.216434 4858 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 17:15:00 crc kubenswrapper[4858]: E0202 17:15:00.218316 4858 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.231401 4858 log.go:25] "Validated CRI v1 runtime API" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.272222 4858 log.go:25] "Validated CRI v1 image API" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.274578 4858 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.280548 4858 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-02-17-10-31-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.280599 4858 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.299217 4858 manager.go:217] Machine: {Timestamp:2026-02-02 17:15:00.297140793 +0000 UTC m=+1.449556098 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:152e49e6-c863-4d14-b212-d4d9f0b62e1a BootID:b1513e64-30b8-48d2-874a-29d4cc9d3b3d Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 
Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:87:fe:d7 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:87:fe:d7 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f5:3c:e6 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:65:ba:a4 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d0:22:05 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ac:f6:07 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:4e:9f:92:57:26:2f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:a6:dd:be:33:ec:ce Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 
Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.299491 4858 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.299640 4858 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.300157 4858 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.300350 4858 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.300387 4858 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.300622 4858 topology_manager.go:138] "Creating topology manager with none policy" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.300634 4858 
container_manager_linux.go:303] "Creating device plugin manager" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.301131 4858 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.301713 4858 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.301915 4858 state_mem.go:36] "Initialized new in-memory state store" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.302070 4858 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.305857 4858 kubelet.go:418] "Attempting to sync node with API server" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.305885 4858 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.305902 4858 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.305917 4858 kubelet.go:324] "Adding apiserver pod source" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.305931 4858 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.310491 4858 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.313167 4858 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.314275 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Feb 02 17:15:00 crc kubenswrapper[4858]: E0202 17:15:00.314391 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.314407 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Feb 02 17:15:00 crc kubenswrapper[4858]: E0202 17:15:00.314487 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.315687 4858 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.317440 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 02 17:15:00 crc 
kubenswrapper[4858]: I0202 17:15:00.317476 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.317491 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.317502 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.317517 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.317526 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.317536 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.317551 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.317572 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.317582 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.317599 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.317611 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.320440 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.320906 4858 server.go:1280] "Started kubelet" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.321048 4858 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.321408 4858 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.322091 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.322336 4858 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 02 17:15:00 crc systemd[1]: Started Kubernetes Kubelet. 
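Annotation: systemd now reports "Started Kubernetes Kubelet", and past the remaining informer, event, and lease errors (all the same connection-refused symptom while the API server is down), the bulk of what follows is volume reconstruction: on restart the volume manager rebuilds its actual state of the world from what it finds under /var/lib/kubelet/pods, marking every volume "uncertain" until it can be reconciled, hence the long run of reconstruct.go:130 entries. To summarize that wall of output, a small helper that tallies reconstructed volumes by plugin type from journal text on stdin; the file name tally.go and the stdin pipeline below are assumptions for illustration:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

// volRe pulls the plugin segment out of kubelet reconstruct.go entries,
// e.g. volumeName="kubernetes.io/configmap/<uid>-config" yields "configmap".
var volRe = regexp.MustCompile(`volumeName="kubernetes\.io/([^/"]+)/`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := volRe.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	plugins := make([]string, 0, len(counts))
	for p := range counts {
		plugins = append(plugins, p)
	}
	sort.Strings(plugins)
	for _, p := range plugins {
		fmt.Printf("%-12s %d\n", p, counts[p])
	}
}

Something like journalctl -u kubelet | go run tally.go then prints one line per plugin (configmap, secret, projected, empty-dir, csi, ...) with a count, which makes it easier to sanity-check that the reconstruction pass covered the pods you expect than reading the entries one by one.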
Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.330533 4858 server.go:460] "Adding debug handlers to kubelet server" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.334341 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.334404 4858 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 02 17:15:00 crc kubenswrapper[4858]: E0202 17:15:00.335414 4858 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.335670 4858 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.335709 4858 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.335709 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 01:44:10.591128607 +0000 UTC Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.335921 4858 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.336887 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Feb 02 17:15:00 crc kubenswrapper[4858]: E0202 17:15:00.337018 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.338581 4858 factory.go:55] Registering systemd factory Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.338648 4858 factory.go:221] Registration of the systemd container factory successfully Feb 02 17:15:00 crc kubenswrapper[4858]: E0202 17:15:00.341324 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.13:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18907d5ef36aedb9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 17:15:00.320873913 +0000 UTC m=+1.473289198,LastTimestamp:2026-02-02 17:15:00.320873913 +0000 UTC m=+1.473289198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 17:15:00 crc kubenswrapper[4858]: E0202 17:15:00.342818 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="200ms" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.342997 4858 factory.go:153] Registering CRI-O factory Feb 02 17:15:00 crc kubenswrapper[4858]: 
I0202 17:15:00.343047 4858 factory.go:221] Registration of the crio container factory successfully Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.343172 4858 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.343217 4858 factory.go:103] Registering Raw factory Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.343244 4858 manager.go:1196] Started watching for new ooms in manager Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.344594 4858 manager.go:319] Starting recovery of all containers Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.360472 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.360546 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.360571 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.360591 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.360611 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.360638 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.360663 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.360690 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.360717 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.360736 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.360756 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.360776 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.360795 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.360819 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.360837 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.360871 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.360890 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.360910 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.360928 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.360946 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.360965 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.361025 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.361070 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.361100 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.361130 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.361156 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.361220 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.361253 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.361279 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.361303 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.361326 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.361352 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.361378 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.361404 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.361432 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.361458 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.361486 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.361517 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.363608 4858 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.363690 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.363714 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.363734 4858 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.363751 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.363822 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.363838 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.363853 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.363868 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.363883 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.363899 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.363928 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.363945 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.363959 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364077 4858 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364103 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364123 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364138 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364163 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364178 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364192 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364209 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364223 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364239 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364254 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364269 4858 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364284 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364299 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364312 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364326 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364341 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364354 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364370 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364387 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364407 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364425 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364443 4858 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364461 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364480 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364496 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364515 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364534 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364551 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364573 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364592 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364609 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364629 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364642 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364654 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364669 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364685 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364699 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364713 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364726 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364743 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364756 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364772 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364789 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364807 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364824 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364842 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364856 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364872 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364891 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364909 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364927 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364946 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.364996 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365013 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365029 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365046 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365060 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365075 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365091 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365105 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365120 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365145 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365162 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365176 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365190 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365203 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365217 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365230 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365246 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365261 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365292 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365306 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365319 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365333 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365349 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365369 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365387 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365405 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365421 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365435 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365450 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365464 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365478 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365491 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365504 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365532 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365546 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365559 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365573 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365588 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365602 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365614 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365629 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365642 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365654 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365666 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365682 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365695 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365707 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365720 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365764 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365780 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365793 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365807 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365820 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365834 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365852 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365864 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365878 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365892 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365906 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365935 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365947 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365959 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.365990 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366005 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366022 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366035 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366049 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366060 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366073 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366085 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366099 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366112 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366125 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366138 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366150 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366162 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366174 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366188 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366201 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366213 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366226 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366239 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366251 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366264 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366276 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366289 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366302 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366329 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366343 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366355 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366368 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366381 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366394 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366408 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366421 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366432 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366445 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366459 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366471 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366485 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366497 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366509 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366521 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366533 4858 reconstruct.go:97] "Volume reconstruction finished" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.366542 4858 reconciler.go:26] "Reconciler: start to sync state" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.380675 4858 manager.go:324] Recovery completed Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.391570 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.393742 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.393848 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.393915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.395062 4858 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.395153 4858 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.395221 4858 state_mem.go:36] "Initialized new in-memory state store" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.396733 4858 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.399281 4858 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.399336 4858 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.399372 4858 kubelet.go:2335] "Starting kubelet main sync loop" Feb 02 17:15:00 crc kubenswrapper[4858]: E0202 17:15:00.399545 4858 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.404931 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Feb 02 17:15:00 crc kubenswrapper[4858]: E0202 17:15:00.405009 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.418162 4858 policy_none.go:49] "None policy: Start" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.419160 4858 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.419190 4858 state_mem.go:35] "Initializing new in-memory state store" Feb 02 17:15:00 crc kubenswrapper[4858]: E0202 17:15:00.436434 4858 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.472363 4858 manager.go:334] "Starting Device Plugin manager" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.472427 4858 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.472441 4858 server.go:79] "Starting device plugin registration server" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.472886 4858 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.472906 4858 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.473070 4858 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.473219 4858 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.473234 4858 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 02 17:15:00 crc kubenswrapper[4858]: E0202 17:15:00.479366 4858 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.499724 4858 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 02 17:15:00 crc kubenswrapper[4858]: 
I0202 17:15:00.499873 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.501620 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.501670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.501685 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.501898 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.502184 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.502259 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.503208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.503235 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.503243 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.503416 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.503476 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.503491 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.503817 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.503941 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.503983 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.504906 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.504941 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.504952 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.505132 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.505160 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.505169 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.505278 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.505512 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.505666 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.506427 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.506456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.506504 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.506680 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.506945 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.507008 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.507039 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.507118 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.507132 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.508299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.508325 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.508339 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.508324 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.508437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.508448 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.508498 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.508545 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.509514 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.509552 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.509562 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:00 crc kubenswrapper[4858]: E0202 17:15:00.543848 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="400ms" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.569301 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.569352 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.569380 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.569403 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.569426 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.569525 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.569595 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.569627 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.569650 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.569675 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.569702 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.569733 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.569757 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.569777 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.569810 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.573274 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.574729 4858 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.574756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.574764 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.574809 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 17:15:00 crc kubenswrapper[4858]: E0202 17:15:00.575200 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.13:6443: connect: connection refused" node="crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.671496 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.671604 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.671634 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.671660 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.671686 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.671709 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.671733 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.671786 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.671811 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.671861 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.671912 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.671938 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.671963 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.671965 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.672073 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.672101 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.672133 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.672215 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.672248 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.672296 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.672345 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.672354 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.672378 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.672266 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.672376 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.672425 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.672411 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 17:15:00 crc 
kubenswrapper[4858]: I0202 17:15:00.672427 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.672444 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.672599 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.775805 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.777613 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.777686 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.777703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.777747 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 17:15:00 crc kubenswrapper[4858]: E0202 17:15:00.778471 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.13:6443: connect: connection refused" node="crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.832069 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.852889 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.876944 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.889315 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-38b498abcbf7f4aa33ac281dca759eca0e15ed1a5ff50514296c5e776a3600ad WatchSource:0}: Error finding container 38b498abcbf7f4aa33ac281dca759eca0e15ed1a5ff50514296c5e776a3600ad: Status 404 returned error can't find the container with id 38b498abcbf7f4aa33ac281dca759eca0e15ed1a5ff50514296c5e776a3600ad Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.892152 4858 util.go:30] "No sandbox for pod can be found. 
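The block above is the volume manager pairing off two messages per host-path volume: reconciler_common.go:218 logs "operationExecutor.MountVolume started" when the mount is queued, and operation_generator.go:637 logs "MountVolume.SetUp succeeded" when it completes, keyed by the volume's UniqueName. A minimal offline checker for that pairing might look like the sketch below; the program and its regexp are illustrative, not kubelet code, and assume journal text on stdin (e.g. from journalctl -u kubelet --no-pager):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	// UniqueName appears as: (UniqueName: \"kubernetes.io/host-path/<pod-uid>-<volume>\")
	// The optional backslashes tolerate klog's escaped quoting as seen in the journal.
	uniq := regexp.MustCompile(`UniqueName: \\?"([^"\\]+)\\?"`)
	started := map[string]bool{}
	mounted := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		m := uniq.FindStringSubmatch(line)
		if m == nil {
			continue
		}
		switch {
		case strings.Contains(line, "operationExecutor.MountVolume started"):
			started[m[1]] = true
		case strings.Contains(line, "MountVolume.SetUp succeeded"):
			mounted[m[1]] = true
		}
	}
	// Anything started but never mounted is worth a closer look.
	for v := range started {
		if !mounted[v] {
			fmt.Println("started but no SetUp succeeded:", v)
		}
	}
}

In this stretch of the log every started volume also reports SetUp succeeded, so the checker would print nothing.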
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.896892 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-dce13d0a4018f739f35c5d1f33988688f26d05a7bff83377dfa1a90c5bc29aff WatchSource:0}: Error finding container dce13d0a4018f739f35c5d1f33988688f26d05a7bff83377dfa1a90c5bc29aff: Status 404 returned error can't find the container with id dce13d0a4018f739f35c5d1f33988688f26d05a7bff83377dfa1a90c5bc29aff Feb 02 17:15:00 crc kubenswrapper[4858]: I0202 17:15:00.901209 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.913547 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a032b7b727a3f196e25b215a6c8fa10c46dabded07c57a7b9ff739c6800b6de1 WatchSource:0}: Error finding container a032b7b727a3f196e25b215a6c8fa10c46dabded07c57a7b9ff739c6800b6de1: Status 404 returned error can't find the container with id a032b7b727a3f196e25b215a6c8fa10c46dabded07c57a7b9ff739c6800b6de1 Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.920116 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-719182c2003875797c17df3457936c349a17a604399e1262cc6f4d37616754b1 WatchSource:0}: Error finding container 719182c2003875797c17df3457936c349a17a604399e1262cc6f4d37616754b1: Status 404 returned error can't find the container with id 719182c2003875797c17df3457936c349a17a604399e1262cc6f4d37616754b1 Feb 02 17:15:00 crc kubenswrapper[4858]: W0202 17:15:00.926599 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e69fed6a9f41a7366105ada5cc71df13cc04e0e57ad0b0fa5b50622c320bb259 WatchSource:0}: Error finding container e69fed6a9f41a7366105ada5cc71df13cc04e0e57ad0b0fa5b50622c320bb259: Status 404 returned error can't find the container with id e69fed6a9f41a7366105ada5cc71df13cc04e0e57ad0b0fa5b50622c320bb259 Feb 02 17:15:00 crc kubenswrapper[4858]: E0202 17:15:00.945448 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="800ms" Feb 02 17:15:01 crc kubenswrapper[4858]: I0202 17:15:01.178875 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:01 crc kubenswrapper[4858]: I0202 17:15:01.180375 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:01 crc kubenswrapper[4858]: I0202 17:15:01.180431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:01 crc kubenswrapper[4858]: I0202 17:15:01.180444 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:01 crc kubenswrapper[4858]: I0202 17:15:01.180474 4858 kubelet_node_status.go:76] "Attempting to 
register node" node="crc" Feb 02 17:15:01 crc kubenswrapper[4858]: E0202 17:15:01.181029 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.13:6443: connect: connection refused" node="crc" Feb 02 17:15:01 crc kubenswrapper[4858]: I0202 17:15:01.323213 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Feb 02 17:15:01 crc kubenswrapper[4858]: I0202 17:15:01.336319 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 00:21:54.21416919 +0000 UTC Feb 02 17:15:01 crc kubenswrapper[4858]: I0202 17:15:01.403889 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"38b498abcbf7f4aa33ac281dca759eca0e15ed1a5ff50514296c5e776a3600ad"} Feb 02 17:15:01 crc kubenswrapper[4858]: I0202 17:15:01.405780 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"dce13d0a4018f739f35c5d1f33988688f26d05a7bff83377dfa1a90c5bc29aff"} Feb 02 17:15:01 crc kubenswrapper[4858]: I0202 17:15:01.407350 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e69fed6a9f41a7366105ada5cc71df13cc04e0e57ad0b0fa5b50622c320bb259"} Feb 02 17:15:01 crc kubenswrapper[4858]: I0202 17:15:01.408642 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"719182c2003875797c17df3457936c349a17a604399e1262cc6f4d37616754b1"} Feb 02 17:15:01 crc kubenswrapper[4858]: I0202 17:15:01.410241 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a032b7b727a3f196e25b215a6c8fa10c46dabded07c57a7b9ff739c6800b6de1"} Feb 02 17:15:01 crc kubenswrapper[4858]: W0202 17:15:01.540456 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Feb 02 17:15:01 crc kubenswrapper[4858]: E0202 17:15:01.540571 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Feb 02 17:15:01 crc kubenswrapper[4858]: W0202 17:15:01.563573 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Feb 02 17:15:01 crc kubenswrapper[4858]: E0202 17:15:01.563648 4858 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Feb 02 17:15:01 crc kubenswrapper[4858]: W0202 17:15:01.634319 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Feb 02 17:15:01 crc kubenswrapper[4858]: E0202 17:15:01.634502 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Feb 02 17:15:01 crc kubenswrapper[4858]: W0202 17:15:01.747271 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Feb 02 17:15:01 crc kubenswrapper[4858]: E0202 17:15:01.747367 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Feb 02 17:15:01 crc kubenswrapper[4858]: E0202 17:15:01.747453 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="1.6s" Feb 02 17:15:01 crc kubenswrapper[4858]: I0202 17:15:01.981512 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:01 crc kubenswrapper[4858]: I0202 17:15:01.983493 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:01 crc kubenswrapper[4858]: I0202 17:15:01.983560 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:01 crc kubenswrapper[4858]: I0202 17:15:01.983577 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:01 crc kubenswrapper[4858]: I0202 17:15:01.983634 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 17:15:01 crc kubenswrapper[4858]: E0202 17:15:01.984517 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.13:6443: connect: connection refused" node="crc" Feb 02 17:15:02 crc kubenswrapper[4858]: E0202 17:15:02.107337 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.13:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18907d5ef36aedb9 default 0 
Feb 02 17:15:02 crc kubenswrapper[4858]: E0202 17:15:02.107337 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.13:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18907d5ef36aedb9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 17:15:00.320873913 +0000 UTC m=+1.473289198,LastTimestamp:2026-02-02 17:15:00.320873913 +0000 UTC m=+1.473289198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.265861 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 02 17:15:02 crc kubenswrapper[4858]: E0202 17:15:02.267726 4858 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError"
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.323122 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.336433 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:34:59.186134342 +0000 UTC
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.415158 4858 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b" exitCode=0
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.415265 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b"}
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.415331 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.416538 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.416568 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.416580 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.418178 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb"}
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.418255 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7"}
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.420618 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75" exitCode=0
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.420686 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75"}
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.420799 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.421673 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.421705 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.421718 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.422889 4858 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa" exitCode=0
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.422963 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa"}
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.423181 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.424307 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.424334 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.424346 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.425959 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.426180 4858 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="39f7b0faa29b8fae2cba1cf6ffa6b5ef1761f8a36172383eef6aa214a2c0d881" exitCode=0
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.426201 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"39f7b0faa29b8fae2cba1cf6ffa6b5ef1761f8a36172383eef6aa214a2c0d881"}
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.426278 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.427182 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.427201 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.427210 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.427291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.427329 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:02 crc kubenswrapper[4858]: I0202 17:15:02.427347 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:03 crc kubenswrapper[4858]: W0202 17:15:03.278537 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused
Feb 02 17:15:03 crc kubenswrapper[4858]: E0202 17:15:03.278628 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError"
Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.323885 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused
Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.337314 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 00:37:22.520324679 +0000 UTC
Feb 02 17:15:03 crc kubenswrapper[4858]: E0202 17:15:03.348965 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="3.2s"
Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.431038 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2e7c56ee2219f48c1296c4cb2e3cc2d390921a11d0faaf18ebcf1365dff431c7"}
Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.431101 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8c9312c3670640a2f0b63e269409ac820988ce1bac655c3a63f49c02fa88afd3"}
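Note the lease-controller lines: interval="800ms" earlier, "1.6s" after that, "3.2s" just above, and "6.4s" further down. The retry interval for creating the kube-node-lease/crc Lease doubles on every failure. A sketch of that doubling, with an assumed ceiling (the real cap is not visible in this log):

package main

import (
	"fmt"
	"time"
)

func main() {
	// interval="800ms", "1.6s", "3.2s", "6.4s" in the log: doubling backoff.
	// The 7s ceiling below is an assumption for the sketch, not kubelet's value.
	interval := 800 * time.Millisecond
	const maxInterval = 7 * time.Second
	for i := 0; i < 5; i++ {
		fmt.Printf("Failed to ensure lease exists, will retry; interval=%v\n", interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}

Capped doubling keeps a dead apiserver from being hammered while still reacting within a second or two once it comes back.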
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d719263f90620c27bbf86fa46cc1140fd71aa3376d2c569ad8d07ff3e806ec1c"} Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.431101 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.432076 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.432109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.432118 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.433627 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77"} Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.433661 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7"} Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.433716 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.434505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.434550 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.434566 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.440954 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8"} Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.441579 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e"} Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.441705 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903"} Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.441776 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea"} Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.443281 4858 generic.go:334] "Generic (PLEG): container 
finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04" exitCode=0 Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.443465 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04"} Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.443760 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.444858 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.444996 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.445068 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.447449 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"28be335257f0bb593dbdf82cf32dd5da94532c27adb666a282caf1b7bc13b496"} Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.447539 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.448785 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.448946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.449028 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.585236 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.590606 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.590656 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.590686 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:03 crc kubenswrapper[4858]: I0202 17:15:03.590716 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 17:15:03 crc kubenswrapper[4858]: E0202 17:15:03.591325 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.13:6443: connect: connection refused" node="crc" Feb 02 17:15:03 crc kubenswrapper[4858]: W0202 17:15:03.837475 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Feb 02 17:15:03 crc 
kubenswrapper[4858]: E0202 17:15:03.837765 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Feb 02 17:15:03 crc kubenswrapper[4858]: W0202 17:15:03.918903 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.13:6443: connect: connection refused Feb 02 17:15:03 crc kubenswrapper[4858]: E0202 17:15:03.919013 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.13:6443: connect: connection refused" logger="UnhandledError" Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.338232 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 22:32:28.043840255 +0000 UTC Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.455662 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87"} Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.455807 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.456787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.456831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.456845 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.458869 4858 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094" exitCode=0 Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.458945 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.459011 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.459049 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.459109 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.459543 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094"} Feb 02 17:15:04 crc 
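The SyncLoop (PLEG) entries carry a small JSON payload after event=: the pod UID, an event type (ContainerStarted/ContainerDied), and a container or sandbox ID. A "Generic (PLEG): container finished ... exitCode=0" immediately followed by ContainerDied for the same ID is what the static pods' init containers look like as they complete one after another (etcd runs through several here). A small decoder for that payload; the struct is a reading aid shaped after the log output, not kubelet's own type:

package main

import (
	"encoding/json"
	"fmt"
)

// PodLifecycleEvent mirrors the event={...} payload printed by the
// "SyncLoop (PLEG): event for pod" lines: ID is the pod UID, Type is
// ContainerStarted/ContainerDied, Data is the container (or sandbox) ID.
type PodLifecycleEvent struct {
	ID   string `json:"ID"`
	Type string `json:"Type"`
	Data string `json:"Data"`
}

func main() {
	// Payload copied from the etcd-crc ContainerDied line above.
	raw := `{"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04"}`
	var ev PodLifecycleEvent
	if err := json.Unmarshal([]byte(raw), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("pod %s: %s %s\n", ev.ID, ev.Type, ev.Data[:12])
}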
Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.459057 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.460081 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.460107 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.460118 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.460551 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.460577 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.460582 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.460597 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.460608 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.460587 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.461275 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.461300 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.461312 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:04 crc kubenswrapper[4858]: I0202 17:15:04.772158 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 17:15:05 crc kubenswrapper[4858]: I0202 17:15:05.249160 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 17:15:05 crc kubenswrapper[4858]: I0202 17:15:05.338800 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 02:00:18.526887951 +0000 UTC
Feb 02 17:15:05 crc kubenswrapper[4858]: I0202 17:15:05.470285 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257"}
Feb 02 17:15:05 crc kubenswrapper[4858]: I0202 17:15:05.470364 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49"}
Feb 02 17:15:05 crc kubenswrapper[4858]: I0202 17:15:05.470371 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:05 crc kubenswrapper[4858]: I0202 17:15:05.470380 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad"}
Feb 02 17:15:05 crc kubenswrapper[4858]: I0202 17:15:05.470398 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562"}
Feb 02 17:15:05 crc kubenswrapper[4858]: I0202 17:15:05.470613 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:05 crc kubenswrapper[4858]: I0202 17:15:05.471203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:05 crc kubenswrapper[4858]: I0202 17:15:05.471228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:05 crc kubenswrapper[4858]: I0202 17:15:05.471246 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:05 crc kubenswrapper[4858]: I0202 17:15:05.471821 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:05 crc kubenswrapper[4858]: I0202 17:15:05.471840 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:05 crc kubenswrapper[4858]: I0202 17:15:05.471849 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:06 crc kubenswrapper[4858]: I0202 17:15:06.339402 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 15:54:19.838896861 +0000 UTC
Feb 02 17:15:06 crc kubenswrapper[4858]: I0202 17:15:06.479401 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:06 crc kubenswrapper[4858]: I0202 17:15:06.479403 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99"}
Feb 02 17:15:06 crc kubenswrapper[4858]: I0202 17:15:06.479404 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:06 crc kubenswrapper[4858]: I0202 17:15:06.480620 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:06 crc kubenswrapper[4858]: I0202 17:15:06.481084 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:06 crc kubenswrapper[4858]: I0202 17:15:06.481290 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:06 crc kubenswrapper[4858]: I0202 17:15:06.481320 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:06 crc kubenswrapper[4858]: I0202 17:15:06.481339 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:06 crc kubenswrapper[4858]: I0202 17:15:06.481369 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:06 crc kubenswrapper[4858]: I0202 17:15:06.654383 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 02 17:15:06 crc kubenswrapper[4858]: I0202 17:15:06.792075 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:06 crc kubenswrapper[4858]: I0202 17:15:06.794104 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:06 crc kubenswrapper[4858]: I0202 17:15:06.794171 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:06 crc kubenswrapper[4858]: I0202 17:15:06.794197 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:06 crc kubenswrapper[4858]: I0202 17:15:06.794237 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 02 17:15:07 crc kubenswrapper[4858]: I0202 17:15:07.339650 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:14:24.595643901 +0000 UTC
Feb 02 17:15:07 crc kubenswrapper[4858]: I0202 17:15:07.482381 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:07 crc kubenswrapper[4858]: I0202 17:15:07.483729 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:07 crc kubenswrapper[4858]: I0202 17:15:07.483786 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:07 crc kubenswrapper[4858]: I0202 17:15:07.483804 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:07 crc kubenswrapper[4858]: I0202 17:15:07.872044 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 17:15:07 crc kubenswrapper[4858]: I0202 17:15:07.872305 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:07 crc kubenswrapper[4858]: I0202 17:15:07.873957 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:07 crc kubenswrapper[4858]: I0202 17:15:07.874056 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:07 crc kubenswrapper[4858]: I0202 17:15:07.874074 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:07 crc kubenswrapper[4858]: I0202 17:15:07.881765 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 17:15:08 crc kubenswrapper[4858]: I0202 17:15:08.125272 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 17:15:08 crc kubenswrapper[4858]: I0202 17:15:08.125543 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:08 crc kubenswrapper[4858]: I0202 17:15:08.127162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:08 crc kubenswrapper[4858]: I0202 17:15:08.127234 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:08 crc kubenswrapper[4858]: I0202 17:15:08.127253 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:08 crc kubenswrapper[4858]: I0202 17:15:08.147007 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 17:15:08 crc kubenswrapper[4858]: I0202 17:15:08.340060 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 16:04:57.179392229 +0000 UTC
Feb 02 17:15:08 crc kubenswrapper[4858]: I0202 17:15:08.451420 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 17:15:08 crc kubenswrapper[4858]: I0202 17:15:08.485081 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:08 crc kubenswrapper[4858]: I0202 17:15:08.485092 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:08 crc kubenswrapper[4858]: I0202 17:15:08.486447 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:08 crc kubenswrapper[4858]: I0202 17:15:08.486494 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:08 crc kubenswrapper[4858]: I0202 17:15:08.486512 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:08 crc kubenswrapper[4858]: I0202 17:15:08.486511 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:08 crc kubenswrapper[4858]: I0202 17:15:08.486606 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:08 crc kubenswrapper[4858]: I0202 17:15:08.486617 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:08 crc kubenswrapper[4858]: I0202 17:15:08.493641 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 17:15:08 crc kubenswrapper[4858]: I0202 17:15:08.864486 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 17:15:09 crc kubenswrapper[4858]: I0202 17:15:09.340690 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 02:44:43.450190772 +0000 UTC
Feb 02 17:15:09 crc kubenswrapper[4858]: I0202 17:15:09.489851 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:09 crc kubenswrapper[4858]: I0202 17:15:09.491448 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:09 crc kubenswrapper[4858]: I0202 17:15:09.491513 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:09 crc kubenswrapper[4858]: I0202 17:15:09.491537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:10 crc kubenswrapper[4858]: I0202 17:15:10.065219 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 02 17:15:10 crc kubenswrapper[4858]: I0202 17:15:10.065471 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:10 crc kubenswrapper[4858]: I0202 17:15:10.067102 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:10 crc kubenswrapper[4858]: I0202 17:15:10.067180 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:10 crc kubenswrapper[4858]: I0202 17:15:10.067203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:10 crc kubenswrapper[4858]: I0202 17:15:10.340943 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 19:12:57.190615518 +0000 UTC
Feb 02 17:15:10 crc kubenswrapper[4858]: E0202 17:15:10.479513 4858 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 02 17:15:10 crc kubenswrapper[4858]: I0202 17:15:10.491375 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:10 crc kubenswrapper[4858]: I0202 17:15:10.492243 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:10 crc kubenswrapper[4858]: I0202 17:15:10.492304 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:10 crc kubenswrapper[4858]: I0202 17:15:10.492328 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:11 crc kubenswrapper[4858]: I0202 17:15:11.341728 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 11:15:17.86524238 +0000 UTC
Feb 02 17:15:11 crc kubenswrapper[4858]: I0202 17:15:11.452112 4858 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 02 17:15:11 crc kubenswrapper[4858]: I0202 17:15:11.452237 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 02 17:15:12 crc kubenswrapper[4858]: I0202 17:15:12.341880 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 16:37:09.033210086 +0000 UTC
Feb 02 17:15:13 crc kubenswrapper[4858]: I0202 17:15:13.316571 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 02 17:15:13 crc kubenswrapper[4858]: I0202 17:15:13.316884 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:13 crc kubenswrapper[4858]: I0202 17:15:13.318413 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:13 crc kubenswrapper[4858]: I0202 17:15:13.318472 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:13 crc kubenswrapper[4858]: I0202 17:15:13.318494 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:13 crc kubenswrapper[4858]: I0202 17:15:13.342785 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 00:38:58.669077902 +0000 UTC
Feb 02 17:15:14 crc kubenswrapper[4858]: I0202 17:15:14.324870 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 02 17:15:14 crc kubenswrapper[4858]: I0202 17:15:14.343153 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 22:44:28.109010497 +0000 UTC
Feb 02 17:15:14 crc kubenswrapper[4858]: I0202 17:15:14.383091 4858 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 02 17:15:14 crc kubenswrapper[4858]: I0202 17:15:14.383159 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 02 17:15:14 crc kubenswrapper[4858]: I0202 17:15:14.394579 4858 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 02 17:15:14 crc kubenswrapper[4858]: I0202 17:15:14.394858 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 02 17:15:15 crc kubenswrapper[4858]: I0202 17:15:15.343930 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:09:09.781917125 +0000 UTC
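Two distinct startup-probe failures appear in the stretch above: a client timeout against cluster-policy-controller's healthz on port 10357, and a 403 from kube-apiserver's /livez because the probe runs as system:anonymous while the apiserver's authorization stack is still coming up; in both cases the prober just records the failure and tries again. The shape of such an HTTP probe, as a hedged stdlib sketch (URL, timeout, and the 2xx/3xx success rule are illustrative, not kubelet's exact implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeOnce mimics the shape of an HTTP startup probe: bounded timeout,
// certificate verification skipped (probes hit self-signed serving certs),
// and any 2xx/3xx counted as success.
func probeOnce(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "context deadline exceeded" or "connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil
	}
	return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
}

func main() {
	if err := probeOnce("https://192.168.126.11:10357/healthz", 1*time.Second); err != nil {
		fmt.Println("Probe failed:", err)
	}
}

Both failure modes seen in the log (timeout and non-2xx status) fall out of the two return paths above.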
Feb 02 17:15:16 crc kubenswrapper[4858]: I0202 17:15:16.344440 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 19:15:35.655790744 +0000 UTC
Feb 02 17:15:17 crc kubenswrapper[4858]: I0202 17:15:17.345380 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 09:05:17.234050358 +0000 UTC
Feb 02 17:15:18 crc kubenswrapper[4858]: I0202 17:15:18.157314 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 17:15:18 crc kubenswrapper[4858]: I0202 17:15:18.157545 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:18 crc kubenswrapper[4858]: I0202 17:15:18.159689 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:18 crc kubenswrapper[4858]: I0202 17:15:18.159725 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:18 crc kubenswrapper[4858]: I0202 17:15:18.159737 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:18 crc kubenswrapper[4858]: I0202 17:15:18.163700 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 17:15:18 crc kubenswrapper[4858]: I0202 17:15:18.346286 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 03:05:05.00941059 +0000 UTC
Feb 02 17:15:18 crc kubenswrapper[4858]: I0202 17:15:18.500961 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 17:15:18 crc kubenswrapper[4858]: I0202 17:15:18.501259 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:18 crc kubenswrapper[4858]: I0202 17:15:18.502900 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:18 crc kubenswrapper[4858]: I0202 17:15:18.502969 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:18 crc kubenswrapper[4858]: I0202 17:15:18.503018 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:18 crc kubenswrapper[4858]: I0202 17:15:18.509729 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 17:15:18 crc kubenswrapper[4858]: I0202 17:15:18.509789 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 17:15:18 crc kubenswrapper[4858]: I0202 17:15:18.511127 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:18 crc kubenswrapper[4858]: I0202 17:15:18.511195 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:18 crc kubenswrapper[4858]: I0202 17:15:18.511225 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:19 crc kubenswrapper[4858]: I0202 17:15:19.347404 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 21:19:07.561865783 +0000 UTC
Feb 02 17:15:19 crc kubenswrapper[4858]: E0202 17:15:19.384871 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Feb 02 17:15:19 crc kubenswrapper[4858]: I0202 17:15:19.386761 4858 trace.go:236] Trace[2032788562]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 17:15:08.991) (total time: 10394ms):
Feb 02 17:15:19 crc kubenswrapper[4858]: Trace[2032788562]: ---"Objects listed" error:<nil> 10394ms (17:15:19.386)
Feb 02 17:15:19 crc kubenswrapper[4858]: Trace[2032788562]: [10.394786596s] [10.394786596s] END
Feb 02 17:15:19 crc kubenswrapper[4858]: I0202 17:15:19.386781 4858 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 02 17:15:19 crc kubenswrapper[4858]: I0202 17:15:19.388869 4858 trace.go:236] Trace[235090395]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 17:15:07.673) (total time: 11715ms):
Feb 02 17:15:19 crc kubenswrapper[4858]: Trace[235090395]: ---"Objects listed" error:<nil> 11715ms (17:15:19.388)
Feb 02 17:15:19 crc kubenswrapper[4858]: Trace[235090395]: [11.715154576s] [11.715154576s] END
Feb 02 17:15:19 crc kubenswrapper[4858]: I0202 17:15:19.388915 4858 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 02 17:15:19 crc kubenswrapper[4858]: E0202 17:15:19.389920 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Feb 02 17:15:19 crc kubenswrapper[4858]: I0202 17:15:19.392624 4858 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 02 17:15:19 crc kubenswrapper[4858]: I0202 17:15:19.393025 4858 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 02 17:15:19 crc kubenswrapper[4858]: I0202 17:15:19.393034 4858 trace.go:236] Trace[1288078583]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 17:15:04.814) (total time: 14578ms):
Feb 02 17:15:19 crc kubenswrapper[4858]: Trace[1288078583]: ---"Objects listed" error:<nil> 14578ms (17:15:19.392)
Feb 02 17:15:19 crc kubenswrapper[4858]: Trace[1288078583]: [14.578519379s] [14.578519379s] END
Feb 02 17:15:19 crc kubenswrapper[4858]: I0202 17:15:19.393061 4858 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 02 17:15:19 crc kubenswrapper[4858]: I0202 17:15:19.403131 4858 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 02 17:15:19 crc kubenswrapper[4858]: I0202 17:15:19.427609 4858 csr.go:261] certificate signing request csr-bj7bx is approved, waiting to be issued
Feb 02 17:15:19 crc kubenswrapper[4858]: I0202 17:15:19.439876 4858 csr.go:257] certificate signing request csr-bj7bx is issued
Feb 02 17:15:19 crc kubenswrapper[4858]: I0202 17:15:19.495396 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 17:15:19 crc kubenswrapper[4858]: I0202 17:15:19.656792 4858 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49688->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 02 17:15:19 crc kubenswrapper[4858]: I0202 17:15:19.656851 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49688->192.168.126.11:17697: read: connection reset by peer" Feb 02 17:15:19 crc kubenswrapper[4858]: I0202 17:15:19.656798 4858 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49682->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 02 17:15:19 crc kubenswrapper[4858]: I0202 17:15:19.656910 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:49682->192.168.126.11:17697: read: connection reset by peer" Feb 02 17:15:19 crc kubenswrapper[4858]: I0202 17:15:19.657223 4858 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 02 17:15:19 crc kubenswrapper[4858]: I0202 17:15:19.657292 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.187320 4858 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 02 17:15:20 crc kubenswrapper[4858]: W0202 17:15:20.187842 4858 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 02 17:15:20 crc kubenswrapper[4858]: W0202 17:15:20.187861 4858 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 02 17:15:20 crc kubenswrapper[4858]: W0202 17:15:20.187889 4858 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 02 17:15:20 crc 
kubenswrapper[4858]: W0202 17:15:20.187957 4858 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.319904 4858 apiserver.go:52] "Watching apiserver" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.329289 4858 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.329683 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-6hxtm","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.330219 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.330263 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.330294 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.330365 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.330603 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.330956 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.331538 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.331716 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.331809 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-6hxtm" Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.331820 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.334035 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.334458 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.334549 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.335070 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.335113 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.335894 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.336441 4858 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.336603 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.336605 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.336888 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.337061 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.336967 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.337671 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.347946 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 21:38:51.08517119 +0000 UTC Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.357809 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.369756 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.381241 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.394301 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.397591 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.397637 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.397671 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.397692 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.397730 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.397751 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.397772 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 
17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.397795 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.397813 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.397845 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.397868 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.397920 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.397944 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.398010 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.398028 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.398044 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.398060 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.398079 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.398123 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.398141 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.398158 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.398776 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.398960 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.399011 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.399030 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.399056 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.399079 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.399097 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.398106 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.399939 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:15:20.899906807 +0000 UTC m=+22.052322072 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.398134 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.399948 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.398733 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.398914 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.398950 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.399396 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.400262 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.399517 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.399578 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.399602 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.399714 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.399732 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.399736 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.399879 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.400402 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.400559 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.400638 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.400712 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.400772 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.400634 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.401022 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.401031 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.401075 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.399113 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.401366 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.401418 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.401454 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.401592 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.401696 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.401761 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402021 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402070 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402082 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402106 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402140 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402176 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402207 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402238 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402268 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402347 4858 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402382 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402394 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402411 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402445 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402482 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402508 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402543 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402578 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402613 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402642 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402675 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402688 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402710 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402740 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402771 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402771 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402801 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402837 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402867 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402902 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402942 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402955 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.402986 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403023 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403059 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403085 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403113 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403146 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403181 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403219 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403244 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403201 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403274 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403273 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403311 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403343 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403369 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403408 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403393 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403439 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403463 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403493 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403482 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403524 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403548 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403615 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403667 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403715 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403622 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403839 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.403928 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.404033 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.404093 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.404165 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.404230 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.404283 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.404343 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.404405 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.404462 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.404508 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.404566 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.404628 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.404681 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.404743 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.404808 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.404862 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.404929 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405048 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405127 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405188 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405156 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405294 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405258 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405444 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405507 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405548 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405581 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405610 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405652 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405680 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405707 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405737 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405769 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405795 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405830 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405859 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405887 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405917 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.405946 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.406000 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.406032 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.406250 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.406316 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.406828 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.406357 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.406859 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.406884 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.406899 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.406371 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.406349 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.406466 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.406538 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.406920 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.407080 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.407135 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.407215 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.407284 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.407320 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.407608 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.407632 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.407818 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.407830 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.408284 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.408290 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.408298 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.408356 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.408454 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.408498 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.408700 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.409352 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.409417 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.409423 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.409459 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.409535 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.409534 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.409576 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.409622 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.409665 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.409793 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.410073 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.410207 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.409660 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.413131 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.413225 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.413323 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.413388 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.413594 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.413874 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.410152 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.413963 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.410388 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.414107 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.414114 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.414152 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.414193 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.414231 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.414233 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.414215 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.414267 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.414321 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.414355 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.414373 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.414340 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.414455 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.413448 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.414672 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.414807 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.414829 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.415796 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.416342 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.416470 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.416364 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.416602 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.417113 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.417158 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.414389 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.417548 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.417589 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.417594 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.417628 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.417660 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.417698 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.417737 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.417773 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.417806 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.417841 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.417872 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.417906 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.417940 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID:
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.417998 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.418071 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.418106 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.418141 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.418169 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.418197 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.418228 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.418266 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.418300 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.418326 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.418348 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.418378 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.418412 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.418446 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.418495 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419163 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419283 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419325 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419367 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419402 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419435 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419464 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419497 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419529 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419560 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419596 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419629 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419662 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419694 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419726 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419759 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419793 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419828 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419860 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419895 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419938 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419969 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420023 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420090 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420130 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420162 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420195 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420232 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420267 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420301 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9f96e711-13fa-4105-b042-45fe046d3d35-hosts-file\") pod \"node-resolver-6hxtm\" (UID: \"9f96e711-13fa-4105-b042-45fe046d3d35\") " pod="openshift-dns/node-resolver-6hxtm" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420333 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420681 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420731 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420771 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420808 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlbtc\" (UniqueName: \"kubernetes.io/projected/9f96e711-13fa-4105-b042-45fe046d3d35-kube-api-access-zlbtc\") pod \"node-resolver-6hxtm\" (UID: \"9f96e711-13fa-4105-b042-45fe046d3d35\") " pod="openshift-dns/node-resolver-6hxtm" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420839 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420874 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420912 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420948 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421004 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421082 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421105 4858 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421127 4858 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421148 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421166 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421182 4858 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421200 4858 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421216 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421232 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421251 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421271 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421288 4858 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421305 4858 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421322 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421339 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: 
I0202 17:15:20.421356 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421374 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421390 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421408 4858 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421425 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421444 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421462 4858 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421504 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421521 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421537 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421600 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421618 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421638 4858 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: 
I0202 17:15:20.421655 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421674 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421692 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421710 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421726 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421744 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421762 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421779 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421798 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421816 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421834 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421850 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421868 4858 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421885 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421901 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421917 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421936 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421954 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421994 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422015 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422031 4858 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422047 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422065 4858 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422084 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422100 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422117 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" 
(UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422133 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422150 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422168 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422184 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422202 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422230 4858 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.417656 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.417687 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.417915 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.418012 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.418402 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.418541 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.418711 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.418793 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419058 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419287 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419307 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419378 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419377 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419766 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419903 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.419945 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420137 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420282 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420506 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420900 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421202 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421357 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421404 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421499 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.421624 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422063 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422132 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422224 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). 
InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422241 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422282 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422456 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422681 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422794 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.422902 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.424850 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.424917 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.420369 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.427403 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.427437 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.427504 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.427683 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.428821 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436218 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436240 4858 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436255 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436299 4858 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436319 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436335 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436348 4858 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436391 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436407 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436420 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436463 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436480 4858 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436496 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436509 4858 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436539 4858 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436554 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436569 4858 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436581 4858 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436623 4858 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436635 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436647 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436660 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" 
(UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436695 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436710 4858 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436722 4858 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436734 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436764 4858 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436779 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.436791 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.429657 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.429431 4858 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.433303 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.440379 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:20.940351574 +0000 UTC m=+22.092766839 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.427785 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.428061 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.428155 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.427962 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.428172 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.428185 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.428457 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.428603 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.440879 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:20.940865599 +0000 UTC m=+22.093280864 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.428813 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.429353 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.429836 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.430851 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.441535 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.441680 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.441764 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-02 17:10:19 +0000 UTC, rotation deadline is 2026-10-22 09:10:05.612786284 +0000 UTC Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.441821 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6279h54m45.170968539s for next certificate rotation Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.443082 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.445358 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.451209 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.453191 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.453226 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.453239 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.453310 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:20.953288219 +0000 UTC m=+22.105703554 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.453388 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.453648 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.453722 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.455320 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.457303 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.457663 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.457446 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.457532 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.457884 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.458218 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.458360 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.458537 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.458589 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.459792 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.463399 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.463566 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.463582 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.463594 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.463633 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:20.963619902 +0000 UTC m=+22.116035167 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.466532 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.467419 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.469051 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.472114 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.478320 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.478485 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.478773 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.479425 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.479431 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.480411 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.481615 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.481890 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.482080 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.482416 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.482549 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.483316 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.483556 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.483702 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.484602 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.484639 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.484672 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.485732 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.485944 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.486195 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.486734 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.487351 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.487642 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.488037 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.488094 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.488456 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.490356 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.490956 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.491011 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.491136 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.491524 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.491687 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.491765 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.492180 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.493308 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.493536 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.493734 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.494791 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.494823 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.495397 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.495907 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.498554 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.499579 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.500946 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.503378 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.503398 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.504861 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.507041 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.509650 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.510000 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.510965 4858 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.512233 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.512730 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.516231 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.518379 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87" exitCode=255 Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.522277 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.522754 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.524004 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.524398 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.526105 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.528944 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.529774 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.530852 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.531865 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.533060 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.533712 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.534995 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.535703 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.536872 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.537242 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.537402 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlbtc\" (UniqueName: \"kubernetes.io/projected/9f96e711-13fa-4105-b042-45fe046d3d35-kube-api-access-zlbtc\") pod \"node-resolver-6hxtm\" (UID: 
\"9f96e711-13fa-4105-b042-45fe046d3d35\") " pod="openshift-dns/node-resolver-6hxtm" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.537555 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.537737 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9f96e711-13fa-4105-b042-45fe046d3d35-hosts-file\") pod \"node-resolver-6hxtm\" (UID: \"9f96e711-13fa-4105-b042-45fe046d3d35\") " pod="openshift-dns/node-resolver-6hxtm" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.537865 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.537935 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.538031 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.538126 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.538189 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.538252 4858 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.538323 4858 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.538402 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.537803 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.537574 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.537870 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.538463 4858 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.538894 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.537892 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9f96e711-13fa-4105-b042-45fe046d3d35-hosts-file\") pod \"node-resolver-6hxtm\" (UID: \"9f96e711-13fa-4105-b042-45fe046d3d35\") " pod="openshift-dns/node-resolver-6hxtm" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.538965 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.539156 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.539345 4858 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.539430 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.540161 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.540269 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.540361 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.540437 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node 
\"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.540519 4858 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.540603 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.540676 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.540736 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.540792 4858 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.540850 4858 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.540911 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.540968 4858 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.541056 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.541110 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.541167 4858 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.542513 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.540643 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.540122 
4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.543714 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.544546 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.544616 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.544642 4858 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.544698 4858 reconciler_common.go:293] "Volume detached for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.544716 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.544732 4858 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.544770 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.544798 4858 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.544832 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.544859 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.544880 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.544910 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.544931 4858 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.544951 4858 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545004 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545034 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545055 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545081 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545102 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545130 4858 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545153 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545180 4858 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545206 4858 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545261 4858 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545292 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545320 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545354 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545376 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545396 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545450 4858 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545479 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545499 4858 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545520 4858 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545542 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545569 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545589 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545610 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545631 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545657 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545680 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545702 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.545734 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546113 4858 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546141 4858 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546165 4858 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546186 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546196 4858 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546207 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546218 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546232 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546241 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546250 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546268 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546281 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546292 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546301 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546314 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546323 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546333 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546344 4858 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546356 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546365 4858 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546374 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546411 4858 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546452 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546470 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546490 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546512 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546526 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546540 4858 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546737 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546826 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546841 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.546900 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.547505 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.548095 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.549430 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.553054 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.553849 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.554650 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.556603 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-ct8b7"] Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.557595 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87"} Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.557797 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-ct8b7" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.557843 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.558099 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlbtc\" (UniqueName: \"kubernetes.io/projected/9f96e711-13fa-4105-b042-45fe046d3d35-kube-api-access-zlbtc\") pod \"node-resolver-6hxtm\" (UID: \"9f96e711-13fa-4105-b042-45fe046d3d35\") " pod="openshift-dns/node-resolver-6hxtm" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.560456 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.560556 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.560658 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.560777 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.567844 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.576189 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.580899 4858 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.587147 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.607401 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.618574 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.632337 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.644152 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.647943 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76546fe8-0dad-45f2-aac1-2ec02ec40898-host\") pod \"node-ca-ct8b7\" (UID: \"76546fe8-0dad-45f2-aac1-2ec02ec40898\") " pod="openshift-image-registry/node-ca-ct8b7" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.648021 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/76546fe8-0dad-45f2-aac1-2ec02ec40898-serviceca\") pod \"node-ca-ct8b7\" (UID: \"76546fe8-0dad-45f2-aac1-2ec02ec40898\") " pod="openshift-image-registry/node-ca-ct8b7" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.648043 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggp5r\" (UniqueName: \"kubernetes.io/projected/76546fe8-0dad-45f2-aac1-2ec02ec40898-kube-api-access-ggp5r\") pod \"node-ca-ct8b7\" (UID: \"76546fe8-0dad-45f2-aac1-2ec02ec40898\") " pod="openshift-image-registry/node-ca-ct8b7" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.652301 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.656649 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resource
s\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.670343 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.670573 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.679497 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.680455 4858 scope.go:117] "RemoveContainer" containerID="44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.680669 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.685798 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.688796 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-6hxtm" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.698541 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.709219 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: W0202 17:15:20.719218 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f96e711_13fa_4105_b042_45fe046d3d35.slice/crio-c7af3003213e7a540befa5c111fb9009186f85078842c4d38e48030fd2afa19c WatchSource:0}: Error finding container c7af3003213e7a540befa5c111fb9009186f85078842c4d38e48030fd2afa19c: Status 404 returned error can't find the container with id c7af3003213e7a540befa5c111fb9009186f85078842c4d38e48030fd2afa19c Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.719585 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.733957 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.748840 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/76546fe8-0dad-45f2-aac1-2ec02ec40898-serviceca\") pod \"node-ca-ct8b7\" (UID: \"76546fe8-0dad-45f2-aac1-2ec02ec40898\") " pod="openshift-image-registry/node-ca-ct8b7" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.748886 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggp5r\" (UniqueName: \"kubernetes.io/projected/76546fe8-0dad-45f2-aac1-2ec02ec40898-kube-api-access-ggp5r\") pod \"node-ca-ct8b7\" (UID: \"76546fe8-0dad-45f2-aac1-2ec02ec40898\") " pod="openshift-image-registry/node-ca-ct8b7" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.748931 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76546fe8-0dad-45f2-aac1-2ec02ec40898-host\") pod \"node-ca-ct8b7\" (UID: \"76546fe8-0dad-45f2-aac1-2ec02ec40898\") " pod="openshift-image-registry/node-ca-ct8b7" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.749061 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/76546fe8-0dad-45f2-aac1-2ec02ec40898-host\") pod \"node-ca-ct8b7\" (UID: \"76546fe8-0dad-45f2-aac1-2ec02ec40898\") " pod="openshift-image-registry/node-ca-ct8b7" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.751383 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/76546fe8-0dad-45f2-aac1-2ec02ec40898-serviceca\") pod \"node-ca-ct8b7\" (UID: \"76546fe8-0dad-45f2-aac1-2ec02ec40898\") " pod="openshift-image-registry/node-ca-ct8b7" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.754636 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.761570 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.767921 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggp5r\" (UniqueName: \"kubernetes.io/projected/76546fe8-0dad-45f2-aac1-2ec02ec40898-kube-api-access-ggp5r\") pod \"node-ca-ct8b7\" (UID: \"76546fe8-0dad-45f2-aac1-2ec02ec40898\") " pod="openshift-image-registry/node-ca-ct8b7" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.852921 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-9szlc"] Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.853323 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-9szlc" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.856125 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.856342 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.856637 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.858885 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.858935 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.868546 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02
T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.875500 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-ct8b7" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.889965 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.908272 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.920664 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.932987 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.944057 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.953304 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.953396 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-host-var-lib-cni-bin\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.953419 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-host-var-lib-kubelet\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.953435 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-hostroot\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.953464 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.953486 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-multus-cni-dir\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.953507 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.953525 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-host-run-k8s-cni-cncf-io\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.953554 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-system-cni-dir\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.953569 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-host-run-netns\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.953585 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bc7963e-1bdc-4038-805e-bd72fc217a13-multus-daemon-config\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.953601 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-os-release\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.953618 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.953634 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-cnibin\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.953652 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-multus-socket-dir-parent\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.953666 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-multus-conf-dir\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.953683 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bc7963e-1bdc-4038-805e-bd72fc217a13-cni-binary-copy\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.953700 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-host-var-lib-cni-multus\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.953716 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-etc-kubernetes\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.953730 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zzdm\" (UniqueName: \"kubernetes.io/projected/4bc7963e-1bdc-4038-805e-bd72fc217a13-kube-api-access-6zzdm\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.953744 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-host-run-multus-certs\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.953829 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:15:21.953814476 +0000 UTC m=+23.106229741 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.953925 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.953955 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:21.95394905 +0000 UTC m=+23.106364315 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.954289 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.954301 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.954311 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.954333 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:21.9543263 +0000 UTC m=+23.106741565 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.954375 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 17:15:20 crc kubenswrapper[4858]: E0202 17:15:20.954395 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:21.954389522 +0000 UTC m=+23.106804787 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.956385 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.966222 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:20 crc kubenswrapper[4858]: I0202 17:15:20.985732 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.009896 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.023676 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054130 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-cnibin\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054184 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-multus-socket-dir-parent\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054202 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-multus-conf-dir\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054221 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-host-var-lib-cni-multus\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054239 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bc7963e-1bdc-4038-805e-bd72fc217a13-cni-binary-copy\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054245 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-cnibin\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054306 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-etc-kubernetes\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054256 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-etc-kubernetes\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054342 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zzdm\" (UniqueName: \"kubernetes.io/projected/4bc7963e-1bdc-4038-805e-bd72fc217a13-kube-api-access-6zzdm\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054311 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-multus-socket-dir-parent\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054383 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-host-run-multus-certs\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054365 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-host-run-multus-certs\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054338 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-multus-conf-dir\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054451 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-host-var-lib-cni-multus\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054521 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-host-var-lib-cni-bin\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054578 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-host-var-lib-cni-bin\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054630 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-host-var-lib-kubelet\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" 
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054667 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-hostroot\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054689 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-hostroot\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054694 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-multus-cni-dir\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054731 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-host-run-k8s-cni-cncf-io\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054667 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-host-var-lib-kubelet\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054756 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-multus-cni-dir\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054769 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054792 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-system-cni-dir\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054806 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-host-run-k8s-cni-cncf-io\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054812 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-os-release\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: E0202 17:15:21.054876 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054885 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bc7963e-1bdc-4038-805e-bd72fc217a13-cni-binary-copy\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: E0202 17:15:21.054894 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 17:15:21 crc kubenswrapper[4858]: E0202 17:15:21.054907 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054887 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-host-run-netns\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054912 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-host-run-netns\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054934 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bc7963e-1bdc-4038-805e-bd72fc217a13-multus-daemon-config\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054912 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-system-cni-dir\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.054961 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4bc7963e-1bdc-4038-805e-bd72fc217a13-os-release\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: E0202 17:15:21.054962 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-02 17:15:22.054945566 +0000 UTC m=+23.207360831 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.055859 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bc7963e-1bdc-4038-805e-bd72fc217a13-multus-daemon-config\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.072710 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zzdm\" (UniqueName: \"kubernetes.io/projected/4bc7963e-1bdc-4038-805e-bd72fc217a13-kube-api-access-6zzdm\") pod \"multus-9szlc\" (UID: \"4bc7963e-1bdc-4038-805e-bd72fc217a13\") " pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.201229 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9szlc" Feb 02 17:15:21 crc kubenswrapper[4858]: W0202 17:15:21.216366 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bc7963e_1bdc_4038_805e_bd72fc217a13.slice/crio-4ddef4787844bc3c7d95235a46f4d11ad8d46a2ec06dc90f8bfadb57603ea704 WatchSource:0}: Error finding container 4ddef4787844bc3c7d95235a46f4d11ad8d46a2ec06dc90f8bfadb57603ea704: Status 404 returned error can't find the container with id 4ddef4787844bc3c7d95235a46f4d11ad8d46a2ec06dc90f8bfadb57603ea704 Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.220627 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-lbvl2"] Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.221074 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wkm4w"] Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.221267 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.222563 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.224170 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-6nv4v"] Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.224926 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.225284 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.225489 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.225554 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.225638 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.225910 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.225936 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.228760 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.229221 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.229253 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.229527 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.229747 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.229941 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.230075 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.234435 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.250324 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.250324 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257021 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-systemd-units\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257056 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csh5j\" (UniqueName: \"kubernetes.io/projected/ce405d19-c944-4a11-8195-bca9289b8d73-kube-api-access-csh5j\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257074 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/341ca71a-aaf0-403c-8ecd-bbf2a70b031b-system-cni-dir\") pod \"multus-additional-cni-plugins-6nv4v\" (UID: \"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\") " pod="openshift-multus/multus-additional-cni-plugins-6nv4v"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257107 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-cni-netd\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257123 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ce405d19-c944-4a11-8195-bca9289b8d73-env-overrides\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257138 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d03a4872-ca6a-4233-bdbf-b31f7890dc3e-rootfs\") pod \"machine-config-daemon-lbvl2\" (UID: \"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\") " pod="openshift-machine-config-operator/machine-config-daemon-lbvl2"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257153 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-run-ovn\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257172 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/341ca71a-aaf0-403c-8ecd-bbf2a70b031b-cnibin\") pod \"multus-additional-cni-plugins-6nv4v\" (UID: \"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\") " pod="openshift-multus/multus-additional-cni-plugins-6nv4v"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257234 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-run-systemd\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257296 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-etc-openvswitch\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257322 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-node-log\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257349 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ce405d19-c944-4a11-8195-bca9289b8d73-ovn-node-metrics-cert\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257466 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d03a4872-ca6a-4233-bdbf-b31f7890dc3e-proxy-tls\") pod \"machine-config-daemon-lbvl2\" (UID: \"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\") " pod="openshift-machine-config-operator/machine-config-daemon-lbvl2"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257494 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-slash\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257520 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ce405d19-c944-4a11-8195-bca9289b8d73-ovnkube-script-lib\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257548 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plt28\" (UniqueName: \"kubernetes.io/projected/341ca71a-aaf0-403c-8ecd-bbf2a70b031b-kube-api-access-plt28\") pod \"multus-additional-cni-plugins-6nv4v\" (UID: \"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\") " pod="openshift-multus/multus-additional-cni-plugins-6nv4v"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257574 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-log-socket\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257712 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-run-openvswitch\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257760 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/341ca71a-aaf0-403c-8ecd-bbf2a70b031b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6nv4v\" (UID: \"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\") " pod="openshift-multus/multus-additional-cni-plugins-6nv4v"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257800 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ce405d19-c944-4a11-8195-bca9289b8d73-ovnkube-config\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257820 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/341ca71a-aaf0-403c-8ecd-bbf2a70b031b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6nv4v\" (UID: \"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\") " pod="openshift-multus/multus-additional-cni-plugins-6nv4v"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257854 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-var-lib-openvswitch\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257893 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257921 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d03a4872-ca6a-4233-bdbf-b31f7890dc3e-mcd-auth-proxy-config\") pod \"machine-config-daemon-lbvl2\" (UID: \"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\") " pod="openshift-machine-config-operator/machine-config-daemon-lbvl2"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257941 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-run-ovn-kubernetes\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.257959 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-cni-bin\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.258001 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/341ca71a-aaf0-403c-8ecd-bbf2a70b031b-cni-binary-copy\") pod \"multus-additional-cni-plugins-6nv4v\" (UID: \"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\") " pod="openshift-multus/multus-additional-cni-plugins-6nv4v"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.258028 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fblg5\" (UniqueName: \"kubernetes.io/projected/d03a4872-ca6a-4233-bdbf-b31f7890dc3e-kube-api-access-fblg5\") pod \"machine-config-daemon-lbvl2\" (UID: \"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\") " pod="openshift-machine-config-operator/machine-config-daemon-lbvl2"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.258049 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-run-netns\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.258069 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/341ca71a-aaf0-403c-8ecd-bbf2a70b031b-os-release\") pod \"multus-additional-cni-plugins-6nv4v\" (UID: \"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\") " pod="openshift-multus/multus-additional-cni-plugins-6nv4v"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.258105 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-kubelet\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.263932 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.274331 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
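Every status patch above dies the same way: the apiserver forwards it to the `pod.network-node-identity.openshift.io` admission webhook at `https://127.0.0.1:9743/pod` and the TCP dial is refused, meaning nothing is listening yet (the webhook's own pod is still in ContainerCreating). A quick Go probe, run on the node, distinguishes "no listener" from a TLS- or HTTP-level failure; this is a diagnostic sketch, not part of the kubelet.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The endpoint comes straight from the log's webhook failure:
	// Post "https://127.0.0.1:9743/pod?timeout=10s": ... connection refused.
	// A raw TCP dial is enough to tell "nothing listening" (connection
	// refused) apart from a certificate or HTTP-level problem.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
	if err != nil {
		fmt.Println("webhook endpoint unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("listener is up; any remaining failure is above TCP")
}
```

Note the circularity this log captures: the webhook that must admit pod status updates runs as a pod on this same node, so every other pod's status patch fails until the network-node-identity pod itself starts.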
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.292883 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.305295 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.320626 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.338715 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.348627 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:04:08.642824431 +0000 UTC
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.352858 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359166 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d03a4872-ca6a-4233-bdbf-b31f7890dc3e-mcd-auth-proxy-config\") pod \"machine-config-daemon-lbvl2\" (UID: \"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\") " pod="openshift-machine-config-operator/machine-config-daemon-lbvl2"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359212 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-run-ovn-kubernetes\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359233 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-cni-bin\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359256 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fblg5\" (UniqueName: \"kubernetes.io/projected/d03a4872-ca6a-4233-bdbf-b31f7890dc3e-kube-api-access-fblg5\") pod \"machine-config-daemon-lbvl2\" (UID: \"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\") " pod="openshift-machine-config-operator/machine-config-daemon-lbvl2"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359278 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-run-netns\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359299 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/341ca71a-aaf0-403c-8ecd-bbf2a70b031b-os-release\") pod \"multus-additional-cni-plugins-6nv4v\" (UID: \"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\") " pod="openshift-multus/multus-additional-cni-plugins-6nv4v"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359317 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/341ca71a-aaf0-403c-8ecd-bbf2a70b031b-cni-binary-copy\") pod \"multus-additional-cni-plugins-6nv4v\" (UID: \"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\") " pod="openshift-multus/multus-additional-cni-plugins-6nv4v"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359339 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-kubelet\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359338 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-cni-bin\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359373 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csh5j\" (UniqueName: \"kubernetes.io/projected/ce405d19-c944-4a11-8195-bca9289b8d73-kube-api-access-csh5j\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359386 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-run-netns\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359440 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-kubelet\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359401 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/341ca71a-aaf0-403c-8ecd-bbf2a70b031b-system-cni-dir\") pod \"multus-additional-cni-plugins-6nv4v\" (UID: \"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\") " pod="openshift-multus/multus-additional-cni-plugins-6nv4v"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359458 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/341ca71a-aaf0-403c-8ecd-bbf2a70b031b-system-cni-dir\") pod \"multus-additional-cni-plugins-6nv4v\" (UID: \"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\") " pod="openshift-multus/multus-additional-cni-plugins-6nv4v"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359464 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/341ca71a-aaf0-403c-8ecd-bbf2a70b031b-os-release\") pod \"multus-additional-cni-plugins-6nv4v\" (UID: \"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\") " pod="openshift-multus/multus-additional-cni-plugins-6nv4v"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359490 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-run-ovn-kubernetes\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359579 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-systemd-units\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359635 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-cni-netd\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359660 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ce405d19-c944-4a11-8195-bca9289b8d73-env-overrides\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359707 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-systemd-units\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359755 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d03a4872-ca6a-4233-bdbf-b31f7890dc3e-rootfs\") pod \"machine-config-daemon-lbvl2\" (UID: \"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\") " pod="openshift-machine-config-operator/machine-config-daemon-lbvl2"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359772 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-run-ovn\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359788 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/341ca71a-aaf0-403c-8ecd-bbf2a70b031b-cnibin\") pod \"multus-additional-cni-plugins-6nv4v\" (UID: \"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\") " pod="openshift-multus/multus-additional-cni-plugins-6nv4v"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359803 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-run-systemd\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359818 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-etc-openvswitch\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359833 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-node-log\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359839 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d03a4872-ca6a-4233-bdbf-b31f7890dc3e-rootfs\") pod \"machine-config-daemon-lbvl2\" (UID: \"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\") " pod="openshift-machine-config-operator/machine-config-daemon-lbvl2"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359848 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ce405d19-c944-4a11-8195-bca9289b8d73-ovn-node-metrics-cert\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359867 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-cni-netd\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359897 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-run-systemd\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359922 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-etc-openvswitch\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359922 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d03a4872-ca6a-4233-bdbf-b31f7890dc3e-proxy-tls\") pod \"machine-config-daemon-lbvl2\" (UID: \"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\") " pod="openshift-machine-config-operator/machine-config-daemon-lbvl2"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359943 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-node-log\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359950 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-slash\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.359963 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-run-ovn\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.360013 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/341ca71a-aaf0-403c-8ecd-bbf2a70b031b-cnibin\") pod \"multus-additional-cni-plugins-6nv4v\" (UID: \"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\") " pod="openshift-multus/multus-additional-cni-plugins-6nv4v"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.360198 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/341ca71a-aaf0-403c-8ecd-bbf2a70b031b-cni-binary-copy\") pod \"multus-additional-cni-plugins-6nv4v\" (UID: \"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\") " pod="openshift-multus/multus-additional-cni-plugins-6nv4v"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.360266 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d03a4872-ca6a-4233-bdbf-b31f7890dc3e-mcd-auth-proxy-config\") pod \"machine-config-daemon-lbvl2\" (UID: \"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\") " pod="openshift-machine-config-operator/machine-config-daemon-lbvl2"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.360296 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-slash\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.360318 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ce405d19-c944-4a11-8195-bca9289b8d73-env-overrides\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.360344 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ce405d19-c944-4a11-8195-bca9289b8d73-ovnkube-script-lib\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.360374 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plt28\" (UniqueName: \"kubernetes.io/projected/341ca71a-aaf0-403c-8ecd-bbf2a70b031b-kube-api-access-plt28\") pod \"multus-additional-cni-plugins-6nv4v\" (UID: \"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\") " pod="openshift-multus/multus-additional-cni-plugins-6nv4v"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.360392 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-log-socket\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.360419 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-run-openvswitch\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.360435 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/341ca71a-aaf0-403c-8ecd-bbf2a70b031b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6nv4v\" (UID: \"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\") " pod="openshift-multus/multus-additional-cni-plugins-6nv4v"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.360452 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ce405d19-c944-4a11-8195-bca9289b8d73-ovnkube-config\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.360467 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/341ca71a-aaf0-403c-8ecd-bbf2a70b031b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6nv4v\" (UID: \"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\") " pod="openshift-multus/multus-additional-cni-plugins-6nv4v"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.360504 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-var-lib-openvswitch\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.360524 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.360573 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.361069 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ce405d19-c944-4a11-8195-bca9289b8d73-ovnkube-script-lib\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.361332 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-log-socket\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.361363 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-run-openvswitch\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.361553 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/341ca71a-aaf0-403c-8ecd-bbf2a70b031b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6nv4v\" (UID: \"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\") " pod="openshift-multus/multus-additional-cni-plugins-6nv4v"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.361746 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/341ca71a-aaf0-403c-8ecd-bbf2a70b031b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6nv4v\" (UID: \"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\") " pod="openshift-multus/multus-additional-cni-plugins-6nv4v"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.361785 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-var-lib-openvswitch\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.362111 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ce405d19-c944-4a11-8195-bca9289b8d73-ovnkube-config\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.362626 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ce405d19-c944-4a11-8195-bca9289b8d73-ovn-node-metrics-cert\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.364134 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d03a4872-ca6a-4233-bdbf-b31f7890dc3e-proxy-tls\") pod \"machine-config-daemon-lbvl2\" (UID: \"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\") " pod="openshift-machine-config-operator/machine-config-daemon-lbvl2"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.367983 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.377919 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fblg5\" (UniqueName: \"kubernetes.io/projected/d03a4872-ca6a-4233-bdbf-b31f7890dc3e-kube-api-access-fblg5\") pod \"machine-config-daemon-lbvl2\" (UID: \"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\") " pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.378414 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.379619 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plt28\" (UniqueName: \"kubernetes.io/projected/341ca71a-aaf0-403c-8ecd-bbf2a70b031b-kube-api-access-plt28\") pod \"multus-additional-cni-plugins-6nv4v\" (UID: \"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\") " pod="openshift-multus/multus-additional-cni-plugins-6nv4v" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.384570 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csh5j\" (UniqueName: \"kubernetes.io/projected/ce405d19-c944-4a11-8195-bca9289b8d73-kube-api-access-csh5j\") pod \"ovnkube-node-wkm4w\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") " pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.392488 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.405576 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.421525 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02
T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.434184 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 
02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.474520 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs
\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.508671 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.523007 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9szlc" event={"ID":"4bc7963e-1bdc-4038-805e-bd72fc217a13","Type":"ContainerStarted","Data":"a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567"}
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.523047 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9szlc" event={"ID":"4bc7963e-1bdc-4038-805e-bd72fc217a13","Type":"ContainerStarted","Data":"4ddef4787844bc3c7d95235a46f4d11ad8d46a2ec06dc90f8bfadb57603ea704"}
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.524741 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ct8b7" event={"ID":"76546fe8-0dad-45f2-aac1-2ec02ec40898","Type":"ContainerStarted","Data":"ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f"}
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.524783 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ct8b7" event={"ID":"76546fe8-0dad-45f2-aac1-2ec02ec40898","Type":"ContainerStarted","Data":"8ac343e8ee8feec2419792957e219f7e78830d5eb73e1ae502a4ecd316abdcbb"}
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.525881 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6hxtm" event={"ID":"9f96e711-13fa-4105-b042-45fe046d3d35","Type":"ContainerStarted","Data":"ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0"}
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.525904 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6hxtm" event={"ID":"9f96e711-13fa-4105-b042-45fe046d3d35","Type":"ContainerStarted","Data":"c7af3003213e7a540befa5c111fb9009186f85078842c4d38e48030fd2afa19c"}
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.527271 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c"}
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.527294 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690"}
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.527306 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6490662cdedbe3e6e839cc287f1f83ea9d6ef6d8ae2e8c04ca608a86576a8283"}
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.528466 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81"}
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.528545 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"07b70535a58c13542c1276316a947fe213ea37b58fac46e40e2dca49661a8f91"}
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.529259 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f9404bd2133404d7eff26388434e4ffa4a0d59da4e62b21708a66d1ec226bcd3"}
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.530749 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.532231 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e"}
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.537781 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.546696 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
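Every status_manager.go:875 failure in this stretch is the same fault repeated: before admitting the kubelet's status patch, the API server must call the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/pod, and nothing is answering there yet, so each attempt ends in connection refused. The surrounding records are consistent with a bootstrap ordering problem rather than a permanent failure: the ContainerStarted events for network-node-identity-vrzqb and the new sandbox for ovnkube-node-wkm4w show the networking components still coming up, and the kubelet simply retries. A minimal reachability probe for that endpoint, assuming nothing beyond the address quoted in the errors (a diagnostic sketch, not kubelet or OpenShift code):

```go
// probe.go: check whether anything is accepting TLS connections on the
// webhook address named in the "failed to call webhook" errors above.
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "127.0.0.1:9743" // taken verbatim from the errors above

	dialer := &net.Dialer{Timeout: 3 * time.Second}
	// Reachability only: certificate verification is skipped because the
	// probe does not authenticate, it only attempts the TLS handshake.
	conn, err := tls.DialWithDialer(dialer, "tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		// While the webhook server is still starting, this reports the same
		// "connect: connection refused" seen in the records above.
		fmt.Println("webhook endpoint unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("webhook endpoint is accepting TLS connections on", addr)
}
```

Run on the node, this would be expected to keep reporting connection refused until the webhook begins listening, at which point the queued status patches should start to go through.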
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.549032 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.556045 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6nv4v"
Feb 02 17:15:21 crc kubenswrapper[4858]: W0202 17:15:21.557377 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce405d19_c944_4a11_8195_bca9289b8d73.slice/crio-e6388e015b4efe0b7d1369e0474426d8095e524d0f06ce875a94c2425b98a739 WatchSource:0}: Error finding container e6388e015b4efe0b7d1369e0474426d8095e524d0f06ce875a94c2425b98a739: Status 404 returned error can't find the container with id e6388e015b4efe0b7d1369e0474426d8095e524d0f06ce875a94c2425b98a739
Feb 02 17:15:21 crc kubenswrapper[4858]: W0202 17:15:21.574112 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod341ca71a_aaf0_403c_8ecd_bbf2a70b031b.slice/crio-12a501dbea2e6cdc690dd1b728286693be11ebbe53d5fd2293577c97cbf8bae0 WatchSource:0}: Error finding container 12a501dbea2e6cdc690dd1b728286693be11ebbe53d5fd2293577c97cbf8bae0: Status 404 returned error can't find the container with id 12a501dbea2e6cdc690dd1b728286693be11ebbe53d5fd2293577c97cbf8bae0
Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.588997 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.637192 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.669187 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.710922 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.749869 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 
02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.785255 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.829889 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.876448 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.914614 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.949278 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.965198 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.965312 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:21 crc kubenswrapper[4858]: E0202 17:15:21.965335 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:15:23.965309946 +0000 UTC m=+25.117725221 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.965385 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:21 crc kubenswrapper[4858]: E0202 17:15:21.965420 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.965522 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:21 crc kubenswrapper[4858]: E0202 17:15:21.965556 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 17:15:21 crc kubenswrapper[4858]: E0202 17:15:21.965585 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:23.965570903 +0000 UTC m=+25.117986238 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 17:15:21 crc kubenswrapper[4858]: E0202 17:15:21.965564 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 17:15:21 crc kubenswrapper[4858]: E0202 17:15:21.965607 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:23.965597294 +0000 UTC m=+25.118012659 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 17:15:21 crc kubenswrapper[4858]: E0202 17:15:21.965624 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 17:15:21 crc kubenswrapper[4858]: E0202 17:15:21.965638 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:21 crc kubenswrapper[4858]: E0202 17:15:21.965684 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:23.965667455 +0000 UTC m=+25.118082810 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:21 crc kubenswrapper[4858]: I0202 17:15:21.987310 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.029586 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.066098 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:22 crc kubenswrapper[4858]: E0202 17:15:22.066298 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 17:15:22 crc kubenswrapper[4858]: E0202 17:15:22.066324 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 17:15:22 crc kubenswrapper[4858]: E0202 17:15:22.066341 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:22 crc kubenswrapper[4858]: E0202 17:15:22.066410 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:24.066388744 +0000 UTC m=+25.218804029 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.068888 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.119174 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.147096 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.193484 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.230387 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.269200 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 
17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.318700 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:22Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.349563 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 01:12:21.917774879 +0000 UTC Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.350580 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:22Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.390745 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:22Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.400078 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:22 crc kubenswrapper[4858]: E0202 17:15:22.400262 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.400078 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.400458 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:22 crc kubenswrapper[4858]: E0202 17:15:22.400474 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:15:22 crc kubenswrapper[4858]: E0202 17:15:22.400778 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.404303 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.405264 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.406175 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.406999 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.407764 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.408490 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.409302 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.411383 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.412362 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 
17:15:22.413508 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.414323 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.415307 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.415867 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.416388 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.417786 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.418466 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.431695 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:22Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.477054 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa9308
9f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:22Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.538041 4858 generic.go:334] "Generic (PLEG): container finished" podID="ce405d19-c944-4a11-8195-bca9289b8d73" containerID="d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4" exitCode=0 Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.538136 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerDied","Data":"d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4"} Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.538446 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerStarted","Data":"e6388e015b4efe0b7d1369e0474426d8095e524d0f06ce875a94c2425b98a739"} Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.540226 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerStarted","Data":"80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5"} Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.540282 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerStarted","Data":"a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57"} Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.540294 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerStarted","Data":"920a50f570209d1847b0528e9a837c396f9d4b1ad28394dbe850a89fe6db32f8"} Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.542130 4858 generic.go:334] "Generic (PLEG): container finished" podID="341ca71a-aaf0-403c-8ecd-bbf2a70b031b" containerID="f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c" exitCode=0 Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.542224 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" event={"ID":"341ca71a-aaf0-403c-8ecd-bbf2a70b031b","Type":"ContainerDied","Data":"f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c"} Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.542261 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" event={"ID":"341ca71a-aaf0-403c-8ecd-bbf2a70b031b","Type":"ContainerStarted","Data":"12a501dbea2e6cdc690dd1b728286693be11ebbe53d5fd2293577c97cbf8bae0"} Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.542704 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.564015 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:22Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.579784 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webho
ok\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:22Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.593350 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:22Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.628561 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:22Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.671900 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:22Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.712952 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:22Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.753739 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:22Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.796511 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:22Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.832287 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:22Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.870886 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:22Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.911164 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa9308
9f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:22Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.953999 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:22Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:22 crc kubenswrapper[4858]: I0202 17:15:22.989902 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:22Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.034607 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.071066 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.114748 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.203309 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.234052 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.250546 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.270623 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.312527 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.350577 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 02:03:49.976263195 +0000 UTC Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.354510 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.357322 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.372733 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.407908 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.413652 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runn
ing\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.448081 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.491351 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.528159 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.548472 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerStarted","Data":"78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d"} Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.548529 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerStarted","Data":"9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839"} Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.548544 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerStarted","Data":"143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99"} Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.548557 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerStarted","Data":"19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9"} Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.548571 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerStarted","Data":"740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab"} Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.548582 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerStarted","Data":"4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4"} Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.550474 4858 generic.go:334] "Generic (PLEG): container finished" podID="341ca71a-aaf0-403c-8ecd-bbf2a70b031b" containerID="bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175" exitCode=0 Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.550537 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" event={"ID":"341ca71a-aaf0-403c-8ecd-bbf2a70b031b","Type":"ContainerDied","Data":"bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175"} Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.554279 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6"} Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.582070 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: E0202 17:15:23.591591 4858 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.631628 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.678941 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.712876 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.751062 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.790251 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.828455 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.873683 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.920131 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.957344 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.987056 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:15:23 crc kubenswrapper[4858]: E0202 17:15:23.987291 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:15:27.987272659 +0000 UTC m=+29.139687934 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.987527 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.987814 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.988089 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:23 crc kubenswrapper[4858]: E0202 17:15:23.987757 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 17:15:23 crc kubenswrapper[4858]: E0202 17:15:23.988445 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 17:15:23 crc kubenswrapper[4858]: E0202 17:15:23.988566 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:23 crc kubenswrapper[4858]: E0202 17:15:23.988716 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:27.988702978 +0000 UTC m=+29.141118253 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:23 crc kubenswrapper[4858]: E0202 17:15:23.988021 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 17:15:23 crc kubenswrapper[4858]: E0202 17:15:23.989370 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:27.989355836 +0000 UTC m=+29.141771111 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 17:15:23 crc kubenswrapper[4858]: E0202 17:15:23.988271 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 17:15:23 crc kubenswrapper[4858]: E0202 17:15:23.989651 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:27.989641044 +0000 UTC m=+29.142056319 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 17:15:23 crc kubenswrapper[4858]: I0202 17:15:23.999659 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:23Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.040023 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:24Z 
is after 2025-08-24T17:21:41Z" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.075457 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:24Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.089307 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:24 crc kubenswrapper[4858]: E0202 17:15:24.089447 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 17:15:24 crc kubenswrapper[4858]: E0202 17:15:24.089472 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 17:15:24 crc kubenswrapper[4858]: E0202 17:15:24.089485 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:24 crc kubenswrapper[4858]: E0202 17:15:24.089538 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:28.089523549 +0000 UTC m=+29.241938824 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.112133 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:24Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.149885 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:24Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.191301 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:24Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.231331 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:24Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.351467 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 15:06:06.370785915 +0000 UTC Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.399865 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.400088 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.400157 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:24 crc kubenswrapper[4858]: E0202 17:15:24.400498 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:15:24 crc kubenswrapper[4858]: E0202 17:15:24.401219 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:15:24 crc kubenswrapper[4858]: E0202 17:15:24.401113 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.560595 4858 generic.go:334] "Generic (PLEG): container finished" podID="341ca71a-aaf0-403c-8ecd-bbf2a70b031b" containerID="2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114" exitCode=0 Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.560687 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" event={"ID":"341ca71a-aaf0-403c-8ecd-bbf2a70b031b","Type":"ContainerDied","Data":"2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114"} Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.597177 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:24Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.615252 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:24Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.635031 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:24Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.654419 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:24Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.670458 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:24Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.682185 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:24Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.718299 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:24Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.745338 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:24Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.762807 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:24Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.773725 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:24Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.790952 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:24Z 
is after 2025-08-24T17:21:41Z" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.807513 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:24Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.817716 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:24Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.828023 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:24Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:24 crc kubenswrapper[4858]: I0202 17:15:24.841440 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:24Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.352258 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 23:14:13.806618017 +0000 UTC Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.571733 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerStarted","Data":"e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad"} Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.578190 4858 generic.go:334] "Generic (PLEG): container finished" podID="341ca71a-aaf0-403c-8ecd-bbf2a70b031b" containerID="973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2" exitCode=0 Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.578252 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" event={"ID":"341ca71a-aaf0-403c-8ecd-bbf2a70b031b","Type":"ContainerDied","Data":"973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2"} Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.597418 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:25Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.620638 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:25Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.641335 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:25Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.660672 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:25Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.681196 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:25Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.694273 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:25Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.703011 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:25Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.712427 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:25Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.728480 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:25Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.742507 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:25Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.762547 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:25Z 
is after 2025-08-24T17:21:41Z" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.785131 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:25Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.790072 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.792263 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.792318 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.792336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.792475 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.800232 4858 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.800434 4858 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.801812 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:25Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.802114 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.802149 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.802166 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.802185 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.802201 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:25Z","lastTransitionTime":"2026-02-02T17:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.814934 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:25Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:25 crc kubenswrapper[4858]: E0202 17:15:25.815119 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:25Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.818283 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.818321 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.818338 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.818359 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.818378 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:25Z","lastTransitionTime":"2026-02-02T17:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.831030 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:25Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:25 crc kubenswrapper[4858]: E0202 17:15:25.832837 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:25Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.838149 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.838197 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.838212 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.838232 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.838244 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:25Z","lastTransitionTime":"2026-02-02T17:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:25 crc kubenswrapper[4858]: E0202 17:15:25.852047 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:25Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.856240 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.856278 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.856290 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.856306 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.856318 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:25Z","lastTransitionTime":"2026-02-02T17:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:25 crc kubenswrapper[4858]: E0202 17:15:25.873398 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:25Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.877130 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.877178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.877190 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.877219 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.877231 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:25Z","lastTransitionTime":"2026-02-02T17:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:25 crc kubenswrapper[4858]: E0202 17:15:25.898055 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:25Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:25 crc kubenswrapper[4858]: E0202 17:15:25.898169 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.900364 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.900390 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.900400 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.900414 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:25 crc kubenswrapper[4858]: I0202 17:15:25.900423 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:25Z","lastTransitionTime":"2026-02-02T17:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.003440 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.003506 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.003519 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.003546 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.003561 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:26Z","lastTransitionTime":"2026-02-02T17:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.096868 4858 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.106155 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.106199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.106211 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.106230 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.106245 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:26Z","lastTransitionTime":"2026-02-02T17:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.208675 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.208726 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.208738 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.208757 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.208772 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:26Z","lastTransitionTime":"2026-02-02T17:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.313036 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.313091 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.313108 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.313134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.313151 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:26Z","lastTransitionTime":"2026-02-02T17:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.353166 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 22:47:01.888752833 +0000 UTC Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.400467 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:26 crc kubenswrapper[4858]: E0202 17:15:26.400699 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.401576 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:26 crc kubenswrapper[4858]: E0202 17:15:26.401691 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.401897 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:26 crc kubenswrapper[4858]: E0202 17:15:26.402024 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.418012 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.418088 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.418110 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.418138 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.418160 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:26Z","lastTransitionTime":"2026-02-02T17:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.522218 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.522292 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.522310 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.522336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.522356 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:26Z","lastTransitionTime":"2026-02-02T17:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.587424 4858 generic.go:334] "Generic (PLEG): container finished" podID="341ca71a-aaf0-403c-8ecd-bbf2a70b031b" containerID="74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b" exitCode=0 Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.587503 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" event={"ID":"341ca71a-aaf0-403c-8ecd-bbf2a70b031b","Type":"ContainerDied","Data":"74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b"} Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.605423 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:26Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.624066 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:26Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.625106 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.625148 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.625163 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.625185 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.625200 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:26Z","lastTransitionTime":"2026-02-02T17:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.646714 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:26Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.660222 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:26Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.670102 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:26Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.681516 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:26Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.695250 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:26Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.707724 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:26Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.728489 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.728535 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.728552 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.728574 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.728590 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:26Z","lastTransitionTime":"2026-02-02T17:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.729296 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:26Z 
is after 2025-08-24T17:21:41Z" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.748568 4858 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.755926 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:26Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.767875 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:26Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.783524 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:26Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.799902 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:26Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.813739 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:26Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.824233 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:26Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.830871 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.830905 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.830918 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.830934 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.830949 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:26Z","lastTransitionTime":"2026-02-02T17:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.968543 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.968601 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.968618 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.968642 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:26 crc kubenswrapper[4858]: I0202 17:15:26.968660 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:26Z","lastTransitionTime":"2026-02-02T17:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.071900 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.071950 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.071971 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.072055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.072075 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:27Z","lastTransitionTime":"2026-02-02T17:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.174504 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.174534 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.174546 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.174564 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.174576 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:27Z","lastTransitionTime":"2026-02-02T17:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.277383 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.277425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.277438 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.277454 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.277466 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:27Z","lastTransitionTime":"2026-02-02T17:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.353522 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 00:01:35.312242099 +0000 UTC Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.380423 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.380481 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.380498 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.380522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.380540 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:27Z","lastTransitionTime":"2026-02-02T17:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.483598 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.483659 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.483680 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.483703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.483720 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:27Z","lastTransitionTime":"2026-02-02T17:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.586753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.586833 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.586857 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.586891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.586917 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:27Z","lastTransitionTime":"2026-02-02T17:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.597647 4858 generic.go:334] "Generic (PLEG): container finished" podID="341ca71a-aaf0-403c-8ecd-bbf2a70b031b" containerID="030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319" exitCode=0 Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.597735 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" event={"ID":"341ca71a-aaf0-403c-8ecd-bbf2a70b031b","Type":"ContainerDied","Data":"030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319"} Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.621357 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:27Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.639721 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:27Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.661768 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:27Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.681295 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:27Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.695997 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.696023 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.696033 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.696047 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.696056 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:27Z","lastTransitionTime":"2026-02-02T17:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.699040 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:27Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.710526 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:27Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.726220 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:27Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.749323 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:27Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.761386 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:27Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.773862 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:27Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.783915 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:27Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.796165 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:27Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.797819 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.797853 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.797864 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.797878 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.797887 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:27Z","lastTransitionTime":"2026-02-02T17:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.808040 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:27Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.821866 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:27Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.833775 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:27Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.900343 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.900401 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.900479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.900507 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:27 crc kubenswrapper[4858]: I0202 17:15:27.900562 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:27Z","lastTransitionTime":"2026-02-02T17:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.003017 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.003070 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.003085 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.003105 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.003118 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:28Z","lastTransitionTime":"2026-02-02T17:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.077261 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.077344 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.077378 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.077413 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:28 crc kubenswrapper[4858]: E0202 17:15:28.077464 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 17:15:28 crc kubenswrapper[4858]: E0202 17:15:28.077511 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 17:15:28 crc kubenswrapper[4858]: E0202 17:15:28.077527 4858 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 17:15:28 crc kubenswrapper[4858]: E0202 17:15:28.077545 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 17:15:28 crc kubenswrapper[4858]: E0202 17:15:28.077557 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:28 crc kubenswrapper[4858]: E0202 17:15:28.077568 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:15:36.077541764 +0000 UTC m=+37.229957059 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:15:28 crc kubenswrapper[4858]: E0202 17:15:28.077593 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:36.077582635 +0000 UTC m=+37.229997910 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:28 crc kubenswrapper[4858]: E0202 17:15:28.077609 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:36.077600895 +0000 UTC m=+37.230016170 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 17:15:28 crc kubenswrapper[4858]: E0202 17:15:28.077623 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:36.077616486 +0000 UTC m=+37.230031761 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.105213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.105259 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.105273 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.105295 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.105314 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:28Z","lastTransitionTime":"2026-02-02T17:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.178962 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:28 crc kubenswrapper[4858]: E0202 17:15:28.179217 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 17:15:28 crc kubenswrapper[4858]: E0202 17:15:28.179243 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 17:15:28 crc kubenswrapper[4858]: E0202 17:15:28.179261 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:28 crc kubenswrapper[4858]: E0202 17:15:28.179340 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:36.179319531 +0000 UTC m=+37.331734826 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.208063 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.208121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.208146 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.208177 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.208200 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:28Z","lastTransitionTime":"2026-02-02T17:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.282388 4858 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.312551 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.312593 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.312603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.312618 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.312630 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:28Z","lastTransitionTime":"2026-02-02T17:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.354234 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 16:16:24.029152519 +0000 UTC Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.400000 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.400085 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.400019 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:28 crc kubenswrapper[4858]: E0202 17:15:28.400194 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:15:28 crc kubenswrapper[4858]: E0202 17:15:28.400292 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:15:28 crc kubenswrapper[4858]: E0202 17:15:28.400367 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.415433 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.415512 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.415542 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.415575 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.415605 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:28Z","lastTransitionTime":"2026-02-02T17:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.518064 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.518121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.518137 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.518167 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.518189 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:28Z","lastTransitionTime":"2026-02-02T17:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.608349 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerStarted","Data":"64c1fd9e48c7bd51e6478bda59b8d1e21c80c6d7dedf38a3fe752a3fa08e26a8"} Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.608807 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.608893 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.613962 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" event={"ID":"341ca71a-aaf0-403c-8ecd-bbf2a70b031b","Type":"ContainerStarted","Data":"5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444"} Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.622675 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.622733 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.622752 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.622778 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.622797 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:28Z","lastTransitionTime":"2026-02-02T17:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.629396 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.642578 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.646557 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.654233 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.666167 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.685426 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.697911 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.709083 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.720373 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.725437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.725500 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.725514 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.725534 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.725552 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:28Z","lastTransitionTime":"2026-02-02T17:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.730547 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.739788 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.750920 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.760904 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.782103 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\
\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c1fd9e48c7bd51e6478bda59b8d1e21c80c6d7dedf38a3fe752a3fa08e26a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/r
un/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc 
kubenswrapper[4858]: I0202 17:15:28.801421 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.812826 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.822796 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.827890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.827934 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.827945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.827962 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.828003 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:28Z","lastTransitionTime":"2026-02-02T17:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.834209 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.845233 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.856655 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.871743 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.882713 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.896506 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.912375 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.927963 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.929698 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.929821 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.929884 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.929957 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.930040 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:28Z","lastTransitionTime":"2026-02-02T17:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.939527 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.950773 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.961291 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.978174 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c1fd9e48c7bd51e6478bda59b8d1e21c80c6d7
dedf38a3fe752a3fa08e26a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:28 crc kubenswrapper[4858]: I0202 17:15:28.999080 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827
ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:28Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.011412 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:29Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.021764 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:29Z is after 2025-08-24T17:21:41Z" Feb 
02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.032313 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.032339 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.032349 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.032363 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.032372 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:29Z","lastTransitionTime":"2026-02-02T17:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.135385 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.135438 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.135467 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.135483 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.135687 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:29Z","lastTransitionTime":"2026-02-02T17:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.239096 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.239167 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.239190 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.239220 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.239257 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:29Z","lastTransitionTime":"2026-02-02T17:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.342683 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.342721 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.342729 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.342743 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.342751 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:29Z","lastTransitionTime":"2026-02-02T17:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.354489 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 23:37:00.990362512 +0000 UTC Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.445134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.445166 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.445176 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.445194 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.445204 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:29Z","lastTransitionTime":"2026-02-02T17:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.548106 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.548220 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.548246 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.548286 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.548307 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:29Z","lastTransitionTime":"2026-02-02T17:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.618162 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.651159 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.651242 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.651265 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.651296 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.651319 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:29Z","lastTransitionTime":"2026-02-02T17:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.754646 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.754703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.754719 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.754742 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.754759 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:29Z","lastTransitionTime":"2026-02-02T17:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.862411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.862474 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.862494 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.862524 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.862547 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:29Z","lastTransitionTime":"2026-02-02T17:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.965556 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.965609 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.965622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.965639 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:29 crc kubenswrapper[4858]: I0202 17:15:29.965652 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:29Z","lastTransitionTime":"2026-02-02T17:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.067919 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.067966 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.067998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.068017 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.068030 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:30Z","lastTransitionTime":"2026-02-02T17:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.171289 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.171335 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.171347 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.171367 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.171380 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:30Z","lastTransitionTime":"2026-02-02T17:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.274692 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.274741 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.274755 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.274774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.274790 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:30Z","lastTransitionTime":"2026-02-02T17:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.355557 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:04:32.657466361 +0000 UTC Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.378042 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.378075 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.378083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.378097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.378105 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:30Z","lastTransitionTime":"2026-02-02T17:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.400423 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.400508 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:30 crc kubenswrapper[4858]: E0202 17:15:30.400557 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.400615 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:30 crc kubenswrapper[4858]: E0202 17:15:30.400712 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:15:30 crc kubenswrapper[4858]: E0202 17:15:30.400774 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.413648 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.424742 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.438504 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.456518 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.470744 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.480914 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.481811 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.481851 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.481881 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.481898 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:30Z","lastTransitionTime":"2026-02-02T17:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.494895 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.510556 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.528416 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.538386 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.553869 4858 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-9szlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.566477 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.584406 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.584444 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.584455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.584472 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.584484 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:30Z","lastTransitionTime":"2026-02-02T17:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.585284 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c1fd9e48c7bd51e6478bda59b8d1e21c80c6d7
dedf38a3fe752a3fa08e26a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.603294 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827
ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.615887 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.620140 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.629026 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.695432 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.695470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.695480 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.695497 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.695507 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:30Z","lastTransitionTime":"2026-02-02T17:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.798261 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.798290 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.798297 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.798310 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.798347 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:30Z","lastTransitionTime":"2026-02-02T17:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.901024 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.901085 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.901104 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.901130 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:30 crc kubenswrapper[4858]: I0202 17:15:30.901148 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:30Z","lastTransitionTime":"2026-02-02T17:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.004515 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.004571 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.004586 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.004607 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.004620 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:31Z","lastTransitionTime":"2026-02-02T17:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.107484 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.107523 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.107534 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.107551 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.107563 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:31Z","lastTransitionTime":"2026-02-02T17:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.209941 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.210024 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.210037 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.210055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.210070 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:31Z","lastTransitionTime":"2026-02-02T17:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.312680 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.312722 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.312733 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.312749 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.312759 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:31Z","lastTransitionTime":"2026-02-02T17:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.356360 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:01:33.208744941 +0000 UTC Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.417024 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.417066 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.417077 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.417091 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.417120 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:31Z","lastTransitionTime":"2026-02-02T17:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.520087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.520146 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.520160 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.520181 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.520195 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:31Z","lastTransitionTime":"2026-02-02T17:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.622309 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.622379 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.622400 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.622432 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.622454 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:31Z","lastTransitionTime":"2026-02-02T17:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.625331 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wkm4w_ce405d19-c944-4a11-8195-bca9289b8d73/ovnkube-controller/0.log" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.628143 4858 generic.go:334] "Generic (PLEG): container finished" podID="ce405d19-c944-4a11-8195-bca9289b8d73" containerID="64c1fd9e48c7bd51e6478bda59b8d1e21c80c6d7dedf38a3fe752a3fa08e26a8" exitCode=1 Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.628178 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerDied","Data":"64c1fd9e48c7bd51e6478bda59b8d1e21c80c6d7dedf38a3fe752a3fa08e26a8"} Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.629410 4858 scope.go:117] "RemoveContainer" containerID="64c1fd9e48c7bd51e6478bda59b8d1e21c80c6d7dedf38a3fe752a3fa08e26a8" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.653350 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:31Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.672265 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:31Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.688180 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:31Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.703113 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:31Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.717138 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:31Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.725376 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.725566 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.725637 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.725747 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.725822 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:31Z","lastTransitionTime":"2026-02-02T17:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.731187 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:31Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.754325 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c1fd9e48c7bd51e6478bda59b8d1e21c80c6d7dedf38a3fe752a3fa08e26a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c1fd9e48c7bd51e6478bda59b8d1e21c80c6d7dedf38a3fe752a3fa08e26a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:31Z\\\",\\\"message\\\":\\\"lector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:30.910194 6163 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 17:15:30.910257 6163 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 17:15:30.910675 6163 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 17:15:30.910709 6163 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 17:15:30.910736 6163 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 17:15:30.910777 6163 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:30.910797 6163 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:30.910803 6163 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:30.910849 6163 factory.go:656] Stopping watch factory\\\\nI0202 17:15:30.910865 6163 ovnkube.go:599] Stopped ovnkube\\\\nI0202 17:15:30.910880 6163 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:30.910904 6163 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:30.910909 6163 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:31Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.779091 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:31Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.796101 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:31Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.809494 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:31Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.819748 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:31Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.827464 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.827499 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.827508 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.827521 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.827530 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:31Z","lastTransitionTime":"2026-02-02T17:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.829787 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:31Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.840450 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:31Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.851805 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:31Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.864142 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:31Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.930720 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.930758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.930771 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.930785 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:31 crc kubenswrapper[4858]: I0202 17:15:31.930797 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:31Z","lastTransitionTime":"2026-02-02T17:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.032949 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.033001 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.033010 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.033023 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.033032 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:32Z","lastTransitionTime":"2026-02-02T17:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.135710 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.135769 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.135782 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.135806 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.135822 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:32Z","lastTransitionTime":"2026-02-02T17:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.238237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.238310 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.238334 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.238367 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.238391 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:32Z","lastTransitionTime":"2026-02-02T17:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.341440 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.341490 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.341499 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.341514 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.341523 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:32Z","lastTransitionTime":"2026-02-02T17:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.356965 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 22:59:09.983580094 +0000 UTC Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.400346 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.400380 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:32 crc kubenswrapper[4858]: E0202 17:15:32.400540 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.400624 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:32 crc kubenswrapper[4858]: E0202 17:15:32.400765 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:15:32 crc kubenswrapper[4858]: E0202 17:15:32.400967 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.434493 4858 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.444562 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.444600 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.444613 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.444632 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.444642 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:32Z","lastTransitionTime":"2026-02-02T17:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.547505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.547537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.547546 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.547561 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.547573 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:32Z","lastTransitionTime":"2026-02-02T17:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.635270 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wkm4w_ce405d19-c944-4a11-8195-bca9289b8d73/ovnkube-controller/0.log" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.650761 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.650824 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.650842 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.650866 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.650883 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:32Z","lastTransitionTime":"2026-02-02T17:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.676832 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerStarted","Data":"51d9396f44f74445d0964a856afd2c936bd5d901265bcf8a214ddcbd748aa377"} Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.677110 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.694302 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:32Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.710570 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:32Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.728089 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:32Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.750075 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:32Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.755066 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.755144 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.755172 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.755201 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.755223 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:32Z","lastTransitionTime":"2026-02-02T17:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.771741 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:32Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.803631 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d9396f44f74445d0964a856afd2c936bd5d901
265bcf8a214ddcbd748aa377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c1fd9e48c7bd51e6478bda59b8d1e21c80c6d7dedf38a3fe752a3fa08e26a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:31Z\\\",\\\"message\\\":\\\"lector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:30.910194 6163 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 17:15:30.910257 6163 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 17:15:30.910675 6163 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 17:15:30.910709 6163 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 17:15:30.910736 6163 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 17:15:30.910777 6163 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:30.910797 6163 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:30.910803 6163 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:30.910849 6163 factory.go:656] Stopping watch factory\\\\nI0202 17:15:30.910865 6163 ovnkube.go:599] Stopped ovnkube\\\\nI0202 17:15:30.910880 6163 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:30.910904 6163 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:30.910909 6163 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:32Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.840235 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827
ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:32Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.856182 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:32Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.857517 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.857578 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.857591 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.857613 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.857624 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:32Z","lastTransitionTime":"2026-02-02T17:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
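
The NodeNotReady condition recorded just above always carries the same reason: kubelet sees no CNI configuration file in /etc/kubernetes/cni/net.d/, the file the network plugin is expected to write once it comes up (the ovnkube-controller restart captured earlier in this stream explains the delay). A minimal standalone sketch of that directory check in Go, to be run on the node itself; the extensions tested are the common CNI config ones, an assumption rather than anything quoted in this log:

```go
// cnicheck.go: report whether any CNI config exists in the directory
// named by the KubeletNotReady message above. Standalone diagnostic
// sketch, not part of kubelet.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path quoted verbatim in the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	found := false
	for _, e := range entries {
		// Common CNI config extensions (assumption, not taken from the log).
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config present:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI config yet; the network plugin has not finished starting")
	}
}
```
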
Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.872494 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:32Z is after 2025-08-24T17:21:41Z"
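
Every one of these patch failures shares a single root cause: the serving certificate of the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-02T17:15:32Z, so every kubelet status PATCH is rejected at admission. A small diagnostic sketch, not part of any OpenShift tooling, that dials the port named in these messages and prints the validity window of the certificate it serves; it assumes it is run on the node and uses only the Go standard library:

```go
// certprobe.go: dial the webhook endpoint from the failing Post calls
// above and print the validity window of the certificate it serves.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // read the expired cert instead of verifying it
	})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%v\n",
			cert.Subject.String(),
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}
```

InsecureSkipVerify is deliberate here: verification is exactly what fails in the log, and the goal is to read the expired certificate, not to trust it. The journal then resumes with the next record:

Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.894003 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 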
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:32Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.915613 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:32Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.936425 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:32Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.956781 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:32Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.961432 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.961487 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.961506 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.961532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.961550 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:32Z","lastTransitionTime":"2026-02-02T17:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
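
The webhook failure above repeats for every pod whose status kubelet tries to patch, so a per-pod tally reads better than the raw stream. A sketch that counts the status_manager.go:875 "Failed to update status for pod" records in a saved capture of this journal; the input filename is hypothetical (for example, journalctl -u kubelet redirected to a file), and the enlarged scanner buffer matters because these records run to several kilobytes per line:

```go
// tally.go: count "Failed to update status for pod" records per pod
// in a saved journal capture (the filename is an assumption).
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`"Failed to update status for pod" pod="([^"]+)"`)
	counts := map[string]int{}

	f, err := os.Open("kubelet-journal.log")
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// bufio.Scanner's default 64 KiB token limit is too small for these lines.
	sc.Buffer(make([]byte, 0, 64*1024), 16*1024*1024)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%5d  %s\n", n, pod)
	}
}
```
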
Has your network provider started?"} Feb 02 17:15:32 crc kubenswrapper[4858]: I0202 17:15:32.981744 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:32Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.001397 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:32Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.064879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.064945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.064963 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.065017 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.065036 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:33Z","lastTransitionTime":"2026-02-02T17:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.159040 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49"] Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.159855 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.162569 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.164460 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.168457 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.168636 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.168665 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.168942 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.169060 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:33Z","lastTransitionTime":"2026-02-02T17:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.185441 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.207069 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.223589 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.241499 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.257318 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.271940 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.272021 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.272037 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.272059 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.272073 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:33Z","lastTransitionTime":"2026-02-02T17:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.274543 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.325930 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d9396f44f74445d0964a856afd2c936bd5d901265bcf8a214ddcbd748aa377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c1fd9e48c7bd51e6478bda59b8d1e21c80c6d7dedf38a3fe752a3fa08e26a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:31Z\\\",\\\"message\\\":\\\"lector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:30.910194 6163 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 17:15:30.910257 6163 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 17:15:30.910675 6163 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 17:15:30.910709 6163 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 17:15:30.910736 6163 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 17:15:30.910777 6163 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:30.910797 6163 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:30.910803 6163 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:30.910849 6163 factory.go:656] Stopping watch factory\\\\nI0202 17:15:30.910865 6163 ovnkube.go:599] Stopped ovnkube\\\\nI0202 17:15:30.910880 6163 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:30.910904 6163 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:30.910909 6163 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.334251 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c3c55b8-1193-47e3-ae7a-3a4b06df2884-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zhk49\" (UID: \"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.334336 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c3c55b8-1193-47e3-ae7a-3a4b06df2884-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zhk49\" (UID: \"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.334384 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9ksp\" (UniqueName: \"kubernetes.io/projected/8c3c55b8-1193-47e3-ae7a-3a4b06df2884-kube-api-access-x9ksp\") pod \"ovnkube-control-plane-749d76644c-zhk49\" (UID: \"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.334507 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c3c55b8-1193-47e3-ae7a-3a4b06df2884-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zhk49\" (UID: \"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.357946 4858 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 06:24:54.533441914 +0000 UTC Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.359719 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.374363 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.374421 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.374438 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.374469 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.374491 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:33Z","lastTransitionTime":"2026-02-02T17:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.382941 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.403208 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.415402 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.427050 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.435599 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c3c55b8-1193-47e3-ae7a-3a4b06df2884-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zhk49\" (UID: \"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.435672 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9ksp\" (UniqueName: \"kubernetes.io/projected/8c3c55b8-1193-47e3-ae7a-3a4b06df2884-kube-api-access-x9ksp\") pod \"ovnkube-control-plane-749d76644c-zhk49\" (UID: \"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.435707 4858 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c3c55b8-1193-47e3-ae7a-3a4b06df2884-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zhk49\" (UID: \"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.435784 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c3c55b8-1193-47e3-ae7a-3a4b06df2884-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zhk49\" (UID: \"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.436943 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c3c55b8-1193-47e3-ae7a-3a4b06df2884-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zhk49\" (UID: \"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.436954 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c3c55b8-1193-47e3-ae7a-3a4b06df2884-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zhk49\" (UID: \"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.443334 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\
\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.445514 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c3c55b8-1193-47e3-ae7a-3a4b06df2884-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zhk49\" (UID: \"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.458739 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.470170 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9ksp\" (UniqueName: \"kubernetes.io/projected/8c3c55b8-1193-47e3-ae7a-3a4b06df2884-kube-api-access-x9ksp\") pod \"ovnkube-control-plane-749d76644c-zhk49\" (UID: \"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.478112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.478175 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.478193 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.478220 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.478241 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:33Z","lastTransitionTime":"2026-02-02T17:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.480179 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.483410 4858 util.go:30] "No sandbox for pod can be found. 
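
The status patches in these entries are logged through Go's %q quoting, so the JSON arrives with every inner quote escaped (and each further rendering layer adds another backslash). Assuming the single-escaped form a raw klog line carries, two json.loads passes recover the patch object; a sketch over a shortened sample payload, with the uid taken from the network-check-target-xd92c entry above:

import json

# Shortened sample of the quoted patch payload carried in the err="..." field;
# the escaping shown is what a single klog quoting layer produces.
raw = r'{\"metadata\":{\"uid\":\"3b6479f0-333b-4a96-9adf-2099afdc2447\"},\"status\":{\"podIP\":null,\"podIPs\":null}}'

unquoted = json.loads(f'"{raw}"')  # first pass strips the \" escapes
patch = json.loads(unquoted)       # second pass parses the patch itself
print(patch["metadata"]["uid"], "->", patch["status"])
# -> 3b6479f0-333b-4a96-9adf-2099afdc2447 -> {'podIP': None, 'podIPs': None}
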
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.501149 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: W0202 17:15:33.505327 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c3c55b8_1193_47e3_ae7a_3a4b06df2884.slice/crio-03784f00a119b62bcdd809001af926f5dec20172e82af3b515e6f582e1770e45 WatchSource:0}: Error finding container 03784f00a119b62bcdd809001af926f5dec20172e82af3b515e6f582e1770e45: Status 404 returned error can't find the container with id 03784f00a119b62bcdd809001af926f5dec20172e82af3b515e6f582e1770e45 Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.580399 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.580441 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.580452 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.580468 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.580479 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:33Z","lastTransitionTime":"2026-02-02T17:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.680503 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" event={"ID":"8c3c55b8-1193-47e3-ae7a-3a4b06df2884","Type":"ContainerStarted","Data":"03784f00a119b62bcdd809001af926f5dec20172e82af3b515e6f582e1770e45"} Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.681797 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.681826 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.681836 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.681850 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.681862 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:33Z","lastTransitionTime":"2026-02-02T17:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
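
Each kubenswrapper record carries a klog header, like the W0202 watch-event warning above: a severity letter (I/W/E/F), the month and day, wall-clock time with microseconds, the PID, and the emitting source file:line. A small parser sketch for that header format, fed with the warning line from the entry above:

import re

# severity letter, MMDD, HH:MM:SS.micros, PID, source file:line, message
KLOG = re.compile(
    r"^(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +"
    r"(?P<pid>\d+) (?P<src>[^ ]+:\d+)\] (?P<msg>.*)$"
)
SEVERITY = {"I": "info", "W": "warning", "E": "error", "F": "fatal"}

line = "W0202 17:15:33.505327 4858 manager.go:1169] Failed to process watch event"
m = KLOG.match(line)
print(SEVERITY[m["sev"]], m["src"], "-", m["msg"])
# -> warning manager.go:1169 - Failed to process watch event
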
Has your network provider started?"} Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.682300 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wkm4w_ce405d19-c944-4a11-8195-bca9289b8d73/ovnkube-controller/1.log" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.682816 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wkm4w_ce405d19-c944-4a11-8195-bca9289b8d73/ovnkube-controller/0.log" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.685115 4858 generic.go:334] "Generic (PLEG): container finished" podID="ce405d19-c944-4a11-8195-bca9289b8d73" containerID="51d9396f44f74445d0964a856afd2c936bd5d901265bcf8a214ddcbd748aa377" exitCode=1 Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.685137 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerDied","Data":"51d9396f44f74445d0964a856afd2c936bd5d901265bcf8a214ddcbd748aa377"} Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.685167 4858 scope.go:117] "RemoveContainer" containerID="64c1fd9e48c7bd51e6478bda59b8d1e21c80c6d7dedf38a3fe752a3fa08e26a8" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.685690 4858 scope.go:117] "RemoveContainer" containerID="51d9396f44f74445d0964a856afd2c936bd5d901265bcf8a214ddcbd748aa377" Feb 02 17:15:33 crc kubenswrapper[4858]: E0202 17:15:33.685860 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-wkm4w_openshift-ovn-kubernetes(ce405d19-c944-4a11-8195-bca9289b8d73)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.700692 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
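
The "back-off 10s restarting failed container" message above is the kubelet's CrashLoopBackOff at its first step; per upstream kubelet defaults (assumed here, not shown in this journal), the delay doubles on each failed restart and is capped at five minutes. A toy reproduction of that schedule:

# Assumed upstream kubelet defaults: 10s initial delay, doubling per failed
# restart, capped at 5 minutes.
base_s, cap_s = 10, 300
delay, schedule = base_s, []
for _ in range(7):
    schedule.append(min(delay, cap_s))
    delay *= 2
print(schedule)  # [10, 20, 40, 80, 160, 300, 300]
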
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.716880 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.732614 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.747796 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.764315 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.777329 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.783510 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.783540 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.783550 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.783564 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.783575 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:33Z","lastTransitionTime":"2026-02-02T17:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.790344 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.805632 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.817141 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.826445 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.835048 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.844772 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.860753 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.877735 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d9396f44f74445d0964a856afd2c936bd5d901265bcf8a214ddcbd748aa377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c1fd9e48c7bd51e6478bda59b8d1e21c80c6d7dedf38a3fe752a3fa08e26a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:31Z\\\",\\\"message\\\":\\\"lector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:30.910194 6163 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 17:15:30.910257 6163 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 17:15:30.910675 6163 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 17:15:30.910709 6163 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 17:15:30.910736 6163 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 17:15:30.910777 6163 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:30.910797 6163 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:30.910803 6163 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:30.910849 6163 factory.go:656] Stopping watch factory\\\\nI0202 17:15:30.910865 6163 ovnkube.go:599] Stopped ovnkube\\\\nI0202 17:15:30.910880 6163 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:30.910904 6163 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:30.910909 6163 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d9396f44f74445d0964a856afd2c936bd5d901265bcf8a214ddcbd748aa377\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:32Z\\\",\\\"message\\\":\\\"467779 6318 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:32.468318 6318 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 17:15:32.468530 6318 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 17:15:32.469060 6318 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:32.469125 6318 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:32.469175 6318 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 17:15:32.469233 6318 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:32.469334 6318 factory.go:656] Stopping watch factory\\\\nI0202 17:15:32.469374 6318 ovnkube.go:599] Stopped ovnkube\\\\nI0202 17:15:32.469403 6318 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:32.469344 6318 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:32.469438 6318 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 
17:15:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.886010 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.886080 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.886104 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.886134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.886153 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:33Z","lastTransitionTime":"2026-02-02T17:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.901769 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.916418 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:33Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.992551 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.992610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.992628 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.992650 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:33 crc kubenswrapper[4858]: I0202 17:15:33.992666 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:33Z","lastTransitionTime":"2026-02-02T17:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.095198 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.095278 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.095296 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.095316 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.095331 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:34Z","lastTransitionTime":"2026-02-02T17:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.198244 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.198305 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.198323 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.198344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.198360 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:34Z","lastTransitionTime":"2026-02-02T17:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.301679 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.301732 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.301751 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.301777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.301795 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:34Z","lastTransitionTime":"2026-02-02T17:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.359021 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:33:53.927316993 +0000 UTC Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.400694 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.400780 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.400705 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:34 crc kubenswrapper[4858]: E0202 17:15:34.400925 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:15:34 crc kubenswrapper[4858]: E0202 17:15:34.401099 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:15:34 crc kubenswrapper[4858]: E0202 17:15:34.401287 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.405065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.405116 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.405128 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.405146 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.405160 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:34Z","lastTransitionTime":"2026-02-02T17:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.508025 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.508086 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.508105 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.508132 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.508152 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:34Z","lastTransitionTime":"2026-02-02T17:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.611566 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.611634 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.611654 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.611684 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.611707 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:34Z","lastTransitionTime":"2026-02-02T17:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.635366 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-t8jfm"] Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.636072 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:15:34 crc kubenswrapper[4858]: E0202 17:15:34.636164 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.658813 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:34Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.678360 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:34Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.691388 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wkm4w_ce405d19-c944-4a11-8195-bca9289b8d73/ovnkube-controller/1.log" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.695439 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:34Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.697629 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" event={"ID":"8c3c55b8-1193-47e3-ae7a-3a4b06df2884","Type":"ContainerStarted","Data":"7c9a3c7ca03f3b32cb484e20e016538920e05487b8c52cbe92c86e62ce30fe85"} Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.697860 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" event={"ID":"8c3c55b8-1193-47e3-ae7a-3a4b06df2884","Type":"ContainerStarted","Data":"cd7f6cd899675ca4ccd2da634b9c4865505db0ac1ca0f2ee8e2c65516fd2c190"} Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.714555 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.714595 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.714609 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.714624 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:34 crc 
kubenswrapper[4858]: I0202 17:15:34.714635 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:34Z","lastTransitionTime":"2026-02-02T17:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.715764 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:34Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.733234 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:34Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.750038 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:34Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.750293 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clqcq\" (UniqueName: \"kubernetes.io/projected/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-kube-api-access-clqcq\") pod \"network-metrics-daemon-t8jfm\" (UID: \"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\") " pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.750352 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs\") pod \"network-metrics-daemon-t8jfm\" (UID: \"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\") " pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.769801 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:34Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.783450 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:34Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.808179 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d9396f44f74445d0964a856afd2c936bd5d901265bcf8a214ddcbd748aa377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c1fd9e48c7bd51e6478bda59b8d1e21c80c6d7dedf38a3fe752a3fa08e26a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:31Z\\\",\\\"message\\\":\\\"lector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:30.910194 6163 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 17:15:30.910257 6163 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 17:15:30.910675 6163 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 17:15:30.910709 6163 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 17:15:30.910736 6163 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 17:15:30.910777 6163 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:30.910797 6163 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:30.910803 6163 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:30.910849 6163 factory.go:656] Stopping watch factory\\\\nI0202 17:15:30.910865 6163 ovnkube.go:599] Stopped ovnkube\\\\nI0202 17:15:30.910880 6163 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:30.910904 6163 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:30.910909 6163 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d9396f44f74445d0964a856afd2c936bd5d901265bcf8a214ddcbd748aa377\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:32Z\\\",\\\"message\\\":\\\"467779 6318 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:32.468318 6318 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 17:15:32.468530 6318 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 17:15:32.469060 6318 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:32.469125 6318 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:32.469175 6318 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 17:15:32.469233 6318 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:32.469334 6318 factory.go:656] Stopping watch factory\\\\nI0202 17:15:32.469374 6318 ovnkube.go:599] Stopped ovnkube\\\\nI0202 17:15:32.469403 6318 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:32.469344 6318 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:32.469438 6318 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 
17:15:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:34Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.816664 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.816703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.816715 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.816732 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.816743 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:34Z","lastTransitionTime":"2026-02-02T17:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
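
The NodeNotReady condition above is the expected knock-on effect rather than a separate fault: with ovnkube-controller crash-looping, nothing has written a CNI config yet, so the runtime keeps answering NetworkReady=false. A minimal sketch of that kind of readiness probe follows; the glob patterns and the exact check are assumptions for illustration, not the runtime's actual code path.

```go
// A rough sketch of the test behind "NetworkReady=false": the runtime's
// network layer wants at least one CNI conf file in the conf dir and
// reports NotReady while none exists. Patterns below are an assumption.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cniConfigured(dir string) (bool, error) {
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return false, err
		}
		if len(matches) > 0 {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	dir := "/etc/kubernetes/cni/net.d" // the directory named in the NodeNotReady message
	ok, err := cniConfigured(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if !ok {
		fmt.Printf("NetworkReady=false: no CNI configuration file in %s\n", dir)
		return
	}
	fmt.Println("NetworkReady=true")
}
```
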
Has your network provider started?"} Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.828169 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:34Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.843180 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:34Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.851859 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clqcq\" (UniqueName: \"kubernetes.io/projected/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-kube-api-access-clqcq\") pod \"network-metrics-daemon-t8jfm\" (UID: \"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\") " pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.851913 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs\") pod \"network-metrics-daemon-t8jfm\" (UID: \"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\") " pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:15:34 crc kubenswrapper[4858]: E0202 17:15:34.852161 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 17:15:34 crc kubenswrapper[4858]: E0202 17:15:34.852277 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs podName:8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:35.352249802 +0000 UTC m=+36.504665097 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs") pod "network-metrics-daemon-t8jfm" (UID: "8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.857093 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
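
The metrics-certs mount failure just above follows the same "not ready yet" theme: the secret object is not registered with the pod manager, so the operation executor parks the volume in backoff instead of retrying hot. A small sketch of that bookkeeping; the 500ms initial delay and the retry-not-before timestamp are taken from the log lines, while the doubling and the cap are assumptions, not values read from kubelet source.

```go
// Sketch of the per-operation state behind "No retries permitted until ...
// (durationBeforeRetry 500ms)": each failure stamps the time and grows the
// wait; a new attempt is allowed only after lastErrorTime+duration.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	lastErrorTime time.Time
	duration      time.Duration
}

const (
	initialDelay = 500 * time.Millisecond        // matches the log line
	maxDelay     = 2*time.Minute + 2*time.Second // assumed cap
)

func (b *backoff) fail(now time.Time) {
	if b.duration == 0 {
		b.duration = initialDelay
	} else {
		b.duration *= 2
		if b.duration > maxDelay {
			b.duration = maxDelay
		}
	}
	b.lastErrorTime = now
}

func (b *backoff) allowed(now time.Time) (bool, time.Time) {
	next := b.lastErrorTime.Add(b.duration)
	return !now.Before(next), next
}

func main() {
	var b backoff
	failedAt := time.Date(2026, 2, 2, 17, 15, 34, 852_249_802, time.UTC)
	b.fail(failedAt) // MountVolume.SetUp failed: secret not registered
	_, next := b.allowed(failedAt)
	fmt.Printf("no retries permitted until %s (durationBeforeRetry %s)\n",
		next.Format("2006-01-02 15:04:05.999999999 -0700 MST"), b.duration)
}
```
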
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:34Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.871309 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:34Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.872326 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clqcq\" (UniqueName: \"kubernetes.io/projected/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-kube-api-access-clqcq\") pod \"network-metrics-daemon-t8jfm\" (UID: \"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\") " pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.883061 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8jfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:34Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.895039 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:34Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.907671 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:34Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.918383 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.918428 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:34 crc 
kubenswrapper[4858]: I0202 17:15:34.918439 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.918459 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.918473 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:34Z","lastTransitionTime":"2026-02-02T17:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.925892 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:34Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.937957 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:34Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.952880 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:34Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.973467 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:34Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:34 crc kubenswrapper[4858]: I0202 17:15:34.992852 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:34Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.021019 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.021051 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.021060 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.021076 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.021087 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:35Z","lastTransitionTime":"2026-02-02T17:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.029420 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d9396f44f74445d0964a856afd2c936bd5d901
265bcf8a214ddcbd748aa377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c1fd9e48c7bd51e6478bda59b8d1e21c80c6d7dedf38a3fe752a3fa08e26a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:31Z\\\",\\\"message\\\":\\\"lector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:30.910194 6163 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 17:15:30.910257 6163 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 17:15:30.910675 6163 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 17:15:30.910709 6163 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 17:15:30.910736 6163 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 17:15:30.910777 6163 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:30.910797 6163 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:30.910803 6163 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:30.910849 6163 factory.go:656] Stopping watch factory\\\\nI0202 17:15:30.910865 6163 ovnkube.go:599] Stopped ovnkube\\\\nI0202 17:15:30.910880 6163 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:30.910904 6163 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:30.910909 6163 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d9396f44f74445d0964a856afd2c936bd5d901265bcf8a214ddcbd748aa377\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:32Z\\\",\\\"message\\\":\\\"467779 6318 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:32.468318 6318 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 17:15:32.468530 6318 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 17:15:32.469060 6318 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:32.469125 6318 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:32.469175 6318 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 17:15:32.469233 6318 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:32.469334 6318 factory.go:656] Stopping watch factory\\\\nI0202 17:15:32.469374 6318 ovnkube.go:599] Stopped ovnkube\\\\nI0202 17:15:32.469403 6318 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 
17:15:32.469344 6318 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:32.469438 6318 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 17:15:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri
-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.061521 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.077934 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.094549 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.107787 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd7f6cd899675ca4ccd2da634b9c4865505db0ac1ca0f2ee8e2c65516fd2c190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c9a3c7ca03f3b32cb484e20e016538920e05487b8c52cbe92c86e62ce30fe85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 
17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.119240 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8jfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.123746 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.123799 4858 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.123816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.123846 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.123869 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:35Z","lastTransitionTime":"2026-02-02T17:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.136137 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.156837 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.178715 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.200242 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.217460 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.226254 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.226292 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.226304 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.226322 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.226334 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:35Z","lastTransitionTime":"2026-02-02T17:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.234126 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.256896 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.257856 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.276209 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.300177 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.315835 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.329534 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.329616 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.329638 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.329668 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.329694 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:35Z","lastTransitionTime":"2026-02-02T17:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
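
Every status patch in the entries above fails for one reason: the API server forwards each pod-status PATCH through the pod.network-node-identity.openshift.io mutating webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired at 2025-08-24T17:21:41Z while the node clock reads 2026-02-02T17:15:35Z. A minimal stdlib-only Python sketch to pull the webhook's certificate for offline inspection (assumes the endpoint is reachable from where this runs; the printed PEM can be fed to openssl x509 -noout -dates):

    import socket
    import ssl

    HOST, PORT = "127.0.0.1", 9743  # endpoint named in the webhook errors above

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False       # the cert is expired; we only want to read it,
    ctx.verify_mode = ssl.CERT_NONE  # so skip verification entirely

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER bytes of the peer cert

    print(ssl.DER_cert_to_PEM_cert(der))  # pipe into: openssl x509 -noout -dates
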
Has your network provider started?"} Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.335268 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.352719 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.357300 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs\") pod \"network-metrics-daemon-t8jfm\" (UID: \"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\") " pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:15:35 crc kubenswrapper[4858]: E0202 17:15:35.357518 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 17:15:35 crc kubenswrapper[4858]: E0202 17:15:35.357649 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs podName:8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:36.357620782 +0000 UTC m=+37.510036087 (durationBeforeRetry 1s). 
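
The "No retries permitted until ... (durationBeforeRetry 1s)" entry just above is the kubelet's per-volume retry backoff: each failed MountVolume.SetUp roughly doubles the wait before the next attempt, up to a cap, which is why the metrics-certs mount is next retried at 17:15:36 and then at growing intervals; the Error detail that follows names the affected volume. A sketch of that doubling pattern (illustrative constants only; the kubelet's own initial delay, factor, and cap may differ):

    from datetime import datetime, timedelta

    def backoff_schedule(start, initial=timedelta(seconds=1),
                         factor=2.0, cap=timedelta(minutes=2), attempts=8):
        # Yield (attempt, retry_time, delay) with capped exponential backoff.
        delay, t = initial, start
        for i in range(1, attempts + 1):
            t = t + delay
            yield i, t, delay
            delay = min(delay * factor, cap)

    for i, when, delay in backoff_schedule(datetime(2026, 2, 2, 17, 15, 35)):
        print(f"attempt {i}: wait {delay} -> retry at {when:%H:%M:%S}")
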
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs") pod "network-metrics-daemon-t8jfm" (UID: "8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.359169 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:05:10.992125947 +0000 UTC Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.364411 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.378062 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.393895 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.407026 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.428667 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d9396f44f74445d0964a856afd2c936bd5d901265bcf8a214ddcbd748aa377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c1fd9e48c7bd51e6478bda59b8d1e21c80c6d7dedf38a3fe752a3fa08e26a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:31Z\\\",\\\"message\\\":\\\"lector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:30.910194 6163 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 17:15:30.910257 6163 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 17:15:30.910675 6163 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 17:15:30.910709 6163 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 17:15:30.910736 6163 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 17:15:30.910777 6163 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:30.910797 6163 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:30.910803 6163 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:30.910849 6163 factory.go:656] Stopping watch factory\\\\nI0202 17:15:30.910865 6163 ovnkube.go:599] Stopped ovnkube\\\\nI0202 17:15:30.910880 6163 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:30.910904 6163 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:30.910909 6163 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d9396f44f74445d0964a856afd2c936bd5d901265bcf8a214ddcbd748aa377\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:32Z\\\",\\\"message\\\":\\\"467779 6318 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:32.468318 6318 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 17:15:32.468530 6318 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 17:15:32.469060 6318 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:32.469125 6318 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:32.469175 6318 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 17:15:32.469233 6318 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:32.469334 6318 factory.go:656] Stopping watch factory\\\\nI0202 17:15:32.469374 6318 ovnkube.go:599] Stopped ovnkube\\\\nI0202 17:15:32.469403 6318 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:32.469344 6318 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:32.469438 6318 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 
17:15:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.432767 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.432834 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.432852 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.432877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.432897 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:35Z","lastTransitionTime":"2026-02-02T17:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
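
Independently of the webhook problem, the node keeps being marked NotReady because no CNI configuration exists yet in /etc/kubernetes/cni/net.d/, and it is unlikely to appear while the ovnkube-controller container of ovnkube-node-wkm4w keeps exiting with code 1 (see the termination messages above). A small watcher for the directory named in the NodeNotReady condition (path taken from the log; assumes it is readable where this runs):

    import pathlib
    import time

    CNI_DIR = pathlib.Path("/etc/kubernetes/cni/net.d")  # from the NodeNotReady message

    while True:
        confs = sorted(p.name for p in CNI_DIR.glob("*.conf*")) if CNI_DIR.is_dir() else []
        print("CNI configs:", confs or "none (NetworkPluginNotReady)")
        if confs:
            break
        time.sleep(5)  # poll until the network plugin drops its config
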
Has your network provider started?"} Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.457815 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.476761 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.491959 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.510100 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd7f6cd899675ca4ccd2da634b9c4865505db0ac1ca0f2ee8e2c65516fd2c190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c9a3c7ca03f3b32cb484e20e016538920e05487b8c52cbe92c86e62ce30fe85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 
17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.527221 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8jfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.536199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.536294 4858 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.536317 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.536355 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.536377 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:35Z","lastTransitionTime":"2026-02-02T17:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.549389 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.573294 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:35Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.640351 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.640410 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.640427 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.640451 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.640471 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:35Z","lastTransitionTime":"2026-02-02T17:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.743959 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.744041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.744060 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.744086 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.744103 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:35Z","lastTransitionTime":"2026-02-02T17:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.847861 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.847918 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.847936 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.847961 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.848011 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:35Z","lastTransitionTime":"2026-02-02T17:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.951101 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.951157 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.951170 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.951189 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.951203 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:35Z","lastTransitionTime":"2026-02-02T17:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.988669 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.988730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.988749 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.988779 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:35 crc kubenswrapper[4858]: I0202 17:15:35.988801 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:35Z","lastTransitionTime":"2026-02-02T17:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.021560 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:36Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.029572 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.029618 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.029640 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.029659 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.029671 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:36Z","lastTransitionTime":"2026-02-02T17:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.064824 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:36Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.072346 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.072401 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.072418 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.072450 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.072463 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:36Z","lastTransitionTime":"2026-02-02T17:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.091063 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:36Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.094833 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.094889 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.094898 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.094910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.094919 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:36Z","lastTransitionTime":"2026-02-02T17:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.109086 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:36Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.112714 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.112773 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.112787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.112805 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.112817 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:36Z","lastTransitionTime":"2026-02-02T17:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.126934 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:36Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.127107 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.128698 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.128750 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.128763 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.128781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.128796 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:36Z","lastTransitionTime":"2026-02-02T17:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.165181 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.165316 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.165350 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.165374 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.165477 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.165501 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.165530 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-02 17:15:52.165517156 +0000 UTC m=+53.317932421 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.165545 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:15:52.165539346 +0000 UTC m=+53.317954611 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.165556 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:52.165551437 +0000 UTC m=+53.317966702 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.165585 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.165624 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.165642 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.165715 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:52.165690101 +0000 UTC m=+53.318105436 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.232468 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.232573 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.232596 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.232655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.232675 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:36Z","lastTransitionTime":"2026-02-02T17:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.266416 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.266690 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.266743 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.266765 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.266851 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:52.26682453 +0000 UTC m=+53.419239835 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.335897 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.335949 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.336036 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.336066 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.336114 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:36Z","lastTransitionTime":"2026-02-02T17:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.359914 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 17:14:23.912891338 +0000 UTC Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.367528 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs\") pod \"network-metrics-daemon-t8jfm\" (UID: \"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\") " pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.367696 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.367864 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs podName:8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:38.367841906 +0000 UTC m=+39.520257201 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs") pod "network-metrics-daemon-t8jfm" (UID: "8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.399641 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.399712 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.399824 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.399841 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.399947 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.400157 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.400322 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:15:36 crc kubenswrapper[4858]: E0202 17:15:36.400455 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.440681 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.441227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.441249 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.441271 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.441289 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:36Z","lastTransitionTime":"2026-02-02T17:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.544268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.544382 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.544406 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.544438 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.544457 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:36Z","lastTransitionTime":"2026-02-02T17:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.648210 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.648266 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.648282 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.648310 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.648328 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:36Z","lastTransitionTime":"2026-02-02T17:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.751035 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.751092 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.751109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.751135 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.751152 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:36Z","lastTransitionTime":"2026-02-02T17:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.854924 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.855057 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.855089 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.855121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.855140 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:36Z","lastTransitionTime":"2026-02-02T17:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.958258 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.958333 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.958350 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.958375 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:36 crc kubenswrapper[4858]: I0202 17:15:36.958392 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:36Z","lastTransitionTime":"2026-02-02T17:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.061196 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.061278 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.061306 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.061335 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.061356 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:37Z","lastTransitionTime":"2026-02-02T17:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.164638 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.164692 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.164710 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.164734 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.164753 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:37Z","lastTransitionTime":"2026-02-02T17:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.268189 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.268239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.268255 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.268279 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.268297 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:37Z","lastTransitionTime":"2026-02-02T17:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.360131 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:18:41.634618939 +0000 UTC Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.371467 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.371513 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.371524 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.371541 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.371554 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:37Z","lastTransitionTime":"2026-02-02T17:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.475506 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.475561 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.475578 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.475602 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.475620 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:37Z","lastTransitionTime":"2026-02-02T17:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.579100 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.579496 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.579691 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.579803 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.579900 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:37Z","lastTransitionTime":"2026-02-02T17:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.683362 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.683470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.683498 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.683528 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.683550 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:37Z","lastTransitionTime":"2026-02-02T17:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.786210 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.786274 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.786294 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.786323 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.786342 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:37Z","lastTransitionTime":"2026-02-02T17:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.889381 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.889447 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.889465 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.889490 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.889508 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:37Z","lastTransitionTime":"2026-02-02T17:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.992357 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.992415 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.992432 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.992459 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:37 crc kubenswrapper[4858]: I0202 17:15:37.992477 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:37Z","lastTransitionTime":"2026-02-02T17:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.095423 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.095482 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.095500 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.095526 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.095545 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:38Z","lastTransitionTime":"2026-02-02T17:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.198658 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.198734 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.198759 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.198787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.198803 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:38Z","lastTransitionTime":"2026-02-02T17:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.301664 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.302085 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.302291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.302493 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.302717 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:38Z","lastTransitionTime":"2026-02-02T17:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.360705 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:49:57.23833164 +0000 UTC Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.392662 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs\") pod \"network-metrics-daemon-t8jfm\" (UID: \"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\") " pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:15:38 crc kubenswrapper[4858]: E0202 17:15:38.392924 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 17:15:38 crc kubenswrapper[4858]: E0202 17:15:38.393086 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs podName:8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:42.393054489 +0000 UTC m=+43.545469794 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs") pod "network-metrics-daemon-t8jfm" (UID: "8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.399929 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.400027 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.400065 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.399940 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:38 crc kubenswrapper[4858]: E0202 17:15:38.400189 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:15:38 crc kubenswrapper[4858]: E0202 17:15:38.400333 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:15:38 crc kubenswrapper[4858]: E0202 17:15:38.400400 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:15:38 crc kubenswrapper[4858]: E0202 17:15:38.400485 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.406367 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.406426 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.406449 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.406479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.406500 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:38Z","lastTransitionTime":"2026-02-02T17:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.509296 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.509358 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.509375 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.509400 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.509417 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:38Z","lastTransitionTime":"2026-02-02T17:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.612360 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.612422 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.612439 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.612462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.612478 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:38Z","lastTransitionTime":"2026-02-02T17:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.714897 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.715027 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.715102 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.715134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.715155 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:38Z","lastTransitionTime":"2026-02-02T17:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.818135 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.818232 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.818255 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.818279 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.818295 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:38Z","lastTransitionTime":"2026-02-02T17:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.921511 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.921818 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.921921 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.922041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:38 crc kubenswrapper[4858]: I0202 17:15:38.922135 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:38Z","lastTransitionTime":"2026-02-02T17:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.024781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.025083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.025215 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.025328 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.025426 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:39Z","lastTransitionTime":"2026-02-02T17:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.129361 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.129413 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.129435 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.129459 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.129480 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:39Z","lastTransitionTime":"2026-02-02T17:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.232827 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.232919 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.232942 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.232966 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.233009 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:39Z","lastTransitionTime":"2026-02-02T17:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.336411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.336451 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.336481 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.336496 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.336510 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:39Z","lastTransitionTime":"2026-02-02T17:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.361163 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 19:50:51.160703841 +0000 UTC Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.439544 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.439756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.439799 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.439831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.439854 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:39Z","lastTransitionTime":"2026-02-02T17:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.542831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.542919 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.542943 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.543007 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.543027 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:39Z","lastTransitionTime":"2026-02-02T17:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.646059 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.646143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.646160 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.646183 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.646199 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:39Z","lastTransitionTime":"2026-02-02T17:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.749027 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.749096 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.749123 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.749154 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.749178 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:39Z","lastTransitionTime":"2026-02-02T17:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.853002 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.853041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.853055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.853072 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.853085 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:39Z","lastTransitionTime":"2026-02-02T17:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.956209 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.956295 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.956322 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.956352 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:39 crc kubenswrapper[4858]: I0202 17:15:39.956371 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:39Z","lastTransitionTime":"2026-02-02T17:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.058748 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.058826 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.058859 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.058890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.058910 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:40Z","lastTransitionTime":"2026-02-02T17:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.162263 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.162400 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.162432 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.162467 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.162493 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:40Z","lastTransitionTime":"2026-02-02T17:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.266928 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.267065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.267098 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.267128 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.267156 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:40Z","lastTransitionTime":"2026-02-02T17:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.361802 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 20:53:36.053138681 +0000 UTC Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.370407 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.370477 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.370494 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.370525 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.370548 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:40Z","lastTransitionTime":"2026-02-02T17:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.400278 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.400319 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.400370 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:15:40 crc kubenswrapper[4858]: E0202 17:15:40.400488 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.400956 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:40 crc kubenswrapper[4858]: E0202 17:15:40.401172 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:15:40 crc kubenswrapper[4858]: E0202 17:15:40.401062 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:15:40 crc kubenswrapper[4858]: E0202 17:15:40.401503 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.428200 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.452302 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.472361 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.473442 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.473512 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.473537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.473562 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.473581 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:40Z","lastTransitionTime":"2026-02-02T17:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
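[annotation] Every status patch in this stretch fails identically: the API server calls the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, and the TLS handshake is rejected because the webhook's serving certificate expired on 2025-08-24 while the node clock reads 2026-02-02. That is the standard x509 validity-window comparison; a small Go reproduction follows. The certificate path below is a placeholder, since the log does not reveal where the webhook certificate is stored.

// certcheck.go - reproduces the x509 check behind "certificate has expired
// or is not yet valid"; the file path is a placeholder.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/path/to/webhook-serving-cert.pem") // placeholder
	if err != nil {
		fmt.Println("read:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		os.Exit(1)
	}
	now := time.Now()
	// The same comparison that yields "current time ... is after ..." above.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("invalid: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate valid until", cert.NotAfter.UTC())
}

Because these failures happen in the status_manager rather than in pod sync, the pods themselves keep running; only their status patches are dropped until the webhook certificate is renewed or the node clock agrees with the certificate's validity window.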
Has your network provider started?"} Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.491307 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.513173 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.527952 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.542746 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.573216 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d9396f44f74445d0964a856afd2c936bd5d901265bcf8a214ddcbd748aa377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c1fd9e48c7bd51e6478bda59b8d1e21c80c6d7dedf38a3fe752a3fa08e26a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:31Z\\\",\\\"message\\\":\\\"lector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:30.910194 6163 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 17:15:30.910257 6163 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 17:15:30.910675 6163 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 17:15:30.910709 6163 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 17:15:30.910736 6163 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 17:15:30.910777 6163 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:30.910797 6163 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:30.910803 6163 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:30.910849 6163 factory.go:656] Stopping watch factory\\\\nI0202 17:15:30.910865 6163 ovnkube.go:599] Stopped ovnkube\\\\nI0202 17:15:30.910880 6163 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:30.910904 6163 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:30.910909 6163 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d9396f44f74445d0964a856afd2c936bd5d901265bcf8a214ddcbd748aa377\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:32Z\\\",\\\"message\\\":\\\"467779 6318 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:32.468318 6318 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 17:15:32.468530 6318 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 17:15:32.469060 6318 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:32.469125 6318 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:32.469175 6318 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 17:15:32.469233 6318 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:32.469334 6318 factory.go:656] Stopping watch factory\\\\nI0202 17:15:32.469374 6318 ovnkube.go:599] Stopped ovnkube\\\\nI0202 17:15:32.469403 6318 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:32.469344 6318 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:32.469438 6318 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 17:15:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.575890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.575926 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.575940 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.575958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.575996 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:40Z","lastTransitionTime":"2026-02-02T17:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.596165 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.614841 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.630862 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.643354 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.656297 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.672764 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.678852 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.678890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.678902 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.678919 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.678930 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:40Z","lastTransitionTime":"2026-02-02T17:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.689081 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:40Z is after 2025-08-24T17:21:41Z"
Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.708635 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd7f6cd899675ca4ccd2da634b9c4865505db0ac1ca0f2ee8e2c65516fd2c190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c9a3c7ca03f3b32cb484e20e016538920e05487b8c52cbe92c86e62ce30fe85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:40Z is after 2025-08-24T17:21:41Z"
Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.728096 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8jfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:40Z is after 2025-08-24T17:21:41Z"
Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.781746 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.781843 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
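All three status-patch failures above share a single root cause: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-02. The validity check that fails inside kubelet's webhook client can be reproduced from the node with a minimal sketch like the one below, assuming Python 3 with the third-party cryptography package (version 42 or newer for the *_utc properties); neither the tooling nor the variable names come from the log itself.

import ssl
from datetime import datetime, timezone
from cryptography import x509

# Fetch the webhook's serving certificate without chain verification;
# a verifying handshake would fail outright on an expired certificate.
pem = ssl.get_server_certificate(("127.0.0.1", 9743))
cert = x509.load_pem_x509_certificate(pem.encode())

now = datetime.now(timezone.utc)
print("notBefore:", cert.not_valid_before_utc)
print("notAfter: ", cert.not_valid_after_utc)   # 2025-08-24T17:21:41Z per the log
print("valid now:", cert.not_valid_before_utc <= now <= cert.not_valid_after_utc)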
for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.781868 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.781901 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.781920 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:40Z","lastTransitionTime":"2026-02-02T17:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.886033 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.886083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.886095 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.886113 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:40 crc kubenswrapper[4858]: I0202 17:15:40.886126 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:40Z","lastTransitionTime":"2026-02-02T17:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.007068 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.007465 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.007601 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.007770 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.007907 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:41Z","lastTransitionTime":"2026-02-02T17:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.110664 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.111237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.111646 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.111955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.112395 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:41Z","lastTransitionTime":"2026-02-02T17:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.215661 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.215721 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.215736 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.215760 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.215778 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:41Z","lastTransitionTime":"2026-02-02T17:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.318598 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.318673 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.318698 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.318731 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.318754 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:41Z","lastTransitionTime":"2026-02-02T17:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.362676 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 09:09:35.457149859 +0000 UTC Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.421815 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.421900 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.421925 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.421954 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.422039 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:41Z","lastTransitionTime":"2026-02-02T17:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.524435 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.524458 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.524466 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.524478 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:41 crc kubenswrapper[4858]: I0202 17:15:41.524486 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:41Z","lastTransitionTime":"2026-02-02T17:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
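The certificate_manager.go line above is kubelet's client-go certificate manager recomputing its rotation deadline for the kubelet-serving certificate. Each pass draws a fresh random point late in the certificate's lifetime, which is why the deadline differs on every occurrence below while the expiration stays fixed at 2026-02-24 05:53:03 UTC. A rough Python illustration follows; the 70-90% jitter window approximates client-go's behaviour rather than quoting it, and the notBefore value is hypothetical, since the log only prints the expiration.

import random
from datetime import datetime, timezone

def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    # Pick a random point roughly 70-90% of the way through the
    # certificate's lifetime, approximating client-go's jittered deadline.
    return not_before + (not_after - not_before) * random.uniform(0.7, 0.9)

not_before = datetime(2025, 2, 24, 5, 53, 3, tzinfo=timezone.utc)  # assumed
not_after = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)   # from the log
print(rotation_deadline(not_before, not_after))  # differs on every call, like the log lines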
Feb 02 17:15:42 crc kubenswrapper[4858]: I0202 17:15:42.363078 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 09:41:23.17983739 +0000 UTC
Feb 02 17:15:42 crc kubenswrapper[4858]: I0202 17:15:42.399764 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 17:15:42 crc kubenswrapper[4858]: I0202 17:15:42.399821 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 17:15:42 crc kubenswrapper[4858]: I0202 17:15:42.399876 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 17:15:42 crc kubenswrapper[4858]: E0202 17:15:42.399966 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 17:15:42 crc kubenswrapper[4858]: I0202 17:15:42.400108 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm"
Feb 02 17:15:42 crc kubenswrapper[4858]: E0202 17:15:42.400206 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 17:15:42 crc kubenswrapper[4858]: E0202 17:15:42.400280 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122"
Feb 02 17:15:42 crc kubenswrapper[4858]: E0202 17:15:42.400412 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 17:15:42 crc kubenswrapper[4858]: I0202 17:15:42.437417 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs\") pod \"network-metrics-daemon-t8jfm\" (UID: \"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\") " pod="openshift-multus/network-metrics-daemon-t8jfm"
Feb 02 17:15:42 crc kubenswrapper[4858]: E0202 17:15:42.437603 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 17:15:42 crc kubenswrapper[4858]: E0202 17:15:42.437719 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs podName:8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122 nodeName:}" failed. No retries permitted until 2026-02-02 17:15:50.437689192 +0000 UTC m=+51.590104517 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs") pod "network-metrics-daemon-t8jfm" (UID: "8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122") : object "openshift-multus"/"metrics-daemon-secret" not registered
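The nestedpendingoperations.go entry above shows kubelet's per-volume exponential backoff: the metrics-certs mount failed because the secret object is not yet registered, so the next retry is pushed out by durationBeforeRetry 8s (no retries permitted until m=+51.59). That 8 s gap matches the fifth step of a doubling sequence. Below is a small sketch of that backoff shape, assuming the commonly cited kubelet defaults of a 500 ms base, factor 2, and a roughly two-minute cap; none of those constants are printed in the log.

import itertools

def backoff_delays(base: float = 0.5, factor: float = 2.0, cap: float = 122.0):
    # Yield capped exponential retry delays in seconds: 0.5, 1, 2, 4, 8, ...
    delay = base
    while True:
        yield min(delay, cap)
        delay *= factor

print(list(itertools.islice(backoff_delays(), 6)))  # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]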
Feb 02 17:15:43 crc kubenswrapper[4858]: I0202 17:15:43.363774 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 11:32:14.627587665 +0000 UTC
Feb 02 17:15:44 crc kubenswrapper[4858]: I0202 17:15:44.364095 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 13:09:29.788887122 +0000 UTC
Feb 02 17:15:44 crc kubenswrapper[4858]: I0202 17:15:44.400239 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 17:15:44 crc kubenswrapper[4858]: I0202 17:15:44.400382 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 17:15:44 crc kubenswrapper[4858]: E0202 17:15:44.400536 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 17:15:44 crc kubenswrapper[4858]: I0202 17:15:44.400950 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 17:15:44 crc kubenswrapper[4858]: E0202 17:15:44.401083 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 17:15:44 crc kubenswrapper[4858]: E0202 17:15:44.401147 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 17:15:44 crc kubenswrapper[4858]: I0202 17:15:44.401493 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm"
Feb 02 17:15:44 crc kubenswrapper[4858]: E0202 17:15:44.401880 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.350191 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.350241 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.350258 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.350286 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.350304 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:45Z","lastTransitionTime":"2026-02-02T17:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.364937 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 09:28:55.790326909 +0000 UTC
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.453287 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.453390 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.453408 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.453432 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.453449 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:45Z","lastTransitionTime":"2026-02-02T17:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.557033 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.557083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.557097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.557121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.557134 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:45Z","lastTransitionTime":"2026-02-02T17:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.660047 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.660110 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.660126 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.660149 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.660166 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:45Z","lastTransitionTime":"2026-02-02T17:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.762698 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.762770 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.762793 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.762821 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.762839 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:45Z","lastTransitionTime":"2026-02-02T17:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.865424 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.865491 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.865514 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.865539 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.865556 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:45Z","lastTransitionTime":"2026-02-02T17:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.968625 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.968681 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.968698 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.968720 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:15:45 crc kubenswrapper[4858]: I0202 17:15:45.968737 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:45Z","lastTransitionTime":"2026-02-02T17:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.071270 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.071625 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.071806 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.071962 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.072183 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:46Z","lastTransitionTime":"2026-02-02T17:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.174645 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.174683 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.174695 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.174712 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.174723 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:46Z","lastTransitionTime":"2026-02-02T17:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.228401 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.228650 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.228796 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.229019 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.229200 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:46Z","lastTransitionTime":"2026-02-02T17:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:15:46 crc kubenswrapper[4858]: E0202 17:15:46.249444 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z"
Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.253759 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.254024 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.254156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.254317 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.254494 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:46Z","lastTransitionTime":"2026-02-02T17:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:46 crc kubenswrapper[4858]: E0202 17:15:46.272851 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.277162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.277218 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.277239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.277267 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.277288 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:46Z","lastTransitionTime":"2026-02-02T17:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:46 crc kubenswrapper[4858]: E0202 17:15:46.293642 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.297630 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.297713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.297736 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.297769 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.297791 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:46Z","lastTransitionTime":"2026-02-02T17:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:46 crc kubenswrapper[4858]: E0202 17:15:46.319906 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.324378 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.324433 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.324457 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.324488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.324507 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:46Z","lastTransitionTime":"2026-02-02T17:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:46 crc kubenswrapper[4858]: E0202 17:15:46.343166 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: E0202 17:15:46.343330 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.345553 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.345586 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.345597 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.345672 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.345687 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:46Z","lastTransitionTime":"2026-02-02T17:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.365302 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 23:35:09.862641751 +0000 UTC Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.400279 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.400430 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:46 crc kubenswrapper[4858]: E0202 17:15:46.400575 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.400670 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:15:46 crc kubenswrapper[4858]: E0202 17:15:46.400744 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.400693 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:46 crc kubenswrapper[4858]: E0202 17:15:46.400896 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:15:46 crc kubenswrapper[4858]: E0202 17:15:46.401099 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.448227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.448287 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.448304 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.448327 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.448344 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:46Z","lastTransitionTime":"2026-02-02T17:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.477903 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.479066 4858 scope.go:117] "RemoveContainer" containerID="51d9396f44f74445d0964a856afd2c936bd5d901265bcf8a214ddcbd748aa377" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.499381 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.519821 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.541055 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd7f6cd899675ca4ccd2da634b9c4865505db0ac1ca0f2ee8e2c65516fd2c190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c9a3c7ca03f3b32cb484e20e016538920e05487b8c52cbe92c86e62ce30fe85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-clus
ter-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.550942 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.551031 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.551056 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.551087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.551109 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:46Z","lastTransitionTime":"2026-02-02T17:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.557416 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8jfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.575961 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.590287 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.605831 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.619654 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.631586 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.652236 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.652995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.653026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.653039 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.653055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.653066 4858 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:46Z","lastTransitionTime":"2026-02-02T17:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.669827 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.682313 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.693803 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.705066 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.717493 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.728800 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.745067 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d9396f44f74445d0964a856afd2c936bd5d901265bcf8a214ddcbd748aa377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d9396f44f74445d0964a856afd2c936bd5d901265bcf8a214ddcbd748aa377\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:32Z\\\",\\\"message\\\":\\\"467779 6318 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:32.468318 6318 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 17:15:32.468530 6318 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 17:15:32.469060 6318 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:32.469125 6318 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:32.469175 6318 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 17:15:32.469233 6318 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:32.469334 6318 factory.go:656] Stopping watch factory\\\\nI0202 17:15:32.469374 6318 ovnkube.go:599] Stopped ovnkube\\\\nI0202 17:15:32.469403 6318 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:32.469344 6318 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:32.469438 6318 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 
17:15:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-wkm4w_openshift-ovn-kubernetes(ce405d19-c944-4a11-8195-bca9289b8d73)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.745713 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wkm4w_ce405d19-c944-4a11-8195-bca9289b8d73/ovnkube-controller/1.log" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.749031 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerStarted","Data":"616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d"} Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.749579 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.754924 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.754951 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.754960 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.754995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.755006 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:46Z","lastTransitionTime":"2026-02-02T17:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.765292 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.777517 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.791828 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd7f6cd899675ca4ccd2da634b9c4865505db0ac1ca0f2ee8e2c65516fd2c190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c9a3c7ca03f3b32cb484e20e016538920e05487b8c52cbe92c86e62ce30fe85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.802023 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8jfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.814497 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.827948 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.838796 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.851886 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.857059 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.857093 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.857104 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.857120 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.857132 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:46Z","lastTransitionTime":"2026-02-02T17:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.866777 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.888823 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d9396f44f74445d0964a856afd2c936bd5d901265bcf8a214ddcbd748aa377\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:32Z\\\",\\\"message\\\":\\\"467779 6318 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:32.468318 6318 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 17:15:32.468530 6318 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 17:15:32.469060 6318 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:32.469125 6318 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:32.469175 6318 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 17:15:32.469233 6318 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:32.469334 6318 factory.go:656] Stopping watch factory\\\\nI0202 17:15:32.469374 6318 ovnkube.go:599] Stopped ovnkube\\\\nI0202 17:15:32.469403 6318 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:32.469344 6318 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:32.469438 6318 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 
17:15:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.908872 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827
ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.925602 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.938560 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 
02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.956257 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.959788 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.959823 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.959833 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.959848 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.959857 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:46Z","lastTransitionTime":"2026-02-02T17:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.967772 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:46 crc kubenswrapper[4858]: I0202 17:15:46.982990 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:46Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.004219 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:47Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.061650 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.061683 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.061692 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.061704 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.061713 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:47Z","lastTransitionTime":"2026-02-02T17:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.164179 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.164422 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.164486 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.164550 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.164605 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:47Z","lastTransitionTime":"2026-02-02T17:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.266689 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.266953 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.267068 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.267142 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.267233 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:47Z","lastTransitionTime":"2026-02-02T17:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.366202 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 02:25:03.712099881 +0000 UTC Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.370698 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.370738 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.370756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.370779 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.370796 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:47Z","lastTransitionTime":"2026-02-02T17:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.474485 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.474520 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.474530 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.474546 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.474558 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:47Z","lastTransitionTime":"2026-02-02T17:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.577321 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.577703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.577898 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.578076 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.578221 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:47Z","lastTransitionTime":"2026-02-02T17:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.680659 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.680701 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.680712 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.680731 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.680752 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:47Z","lastTransitionTime":"2026-02-02T17:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.755292 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wkm4w_ce405d19-c944-4a11-8195-bca9289b8d73/ovnkube-controller/2.log" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.756495 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wkm4w_ce405d19-c944-4a11-8195-bca9289b8d73/ovnkube-controller/1.log" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.760922 4858 generic.go:334] "Generic (PLEG): container finished" podID="ce405d19-c944-4a11-8195-bca9289b8d73" containerID="616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d" exitCode=1 Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.761019 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerDied","Data":"616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d"} Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.761079 4858 scope.go:117] "RemoveContainer" containerID="51d9396f44f74445d0964a856afd2c936bd5d901265bcf8a214ddcbd748aa377" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.762042 4858 scope.go:117] "RemoveContainer" containerID="616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d" Feb 02 17:15:47 crc kubenswrapper[4858]: E0202 17:15:47.762303 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wkm4w_openshift-ovn-kubernetes(ce405d19-c944-4a11-8195-bca9289b8d73)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.780323 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:47Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.783112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.783139 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.783148 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.783161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.783171 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:47Z","lastTransitionTime":"2026-02-02T17:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.793644 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:47Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.804256 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd7f6cd899675ca4ccd2da634b9c4865505db0ac1ca0f2ee8e2c65516fd2c190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c9a3c7ca03f3b32cb484e20e016538920e05487b8c52cbe92c86e62ce30fe85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:47Z is after 2025-08-24T17:21:41Z" Feb 02 
17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.817292 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8jfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:47Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.833757 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:47Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.853997 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:47Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.870280 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:47Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.883419 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:47Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.885824 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.885900 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.885920 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.885947 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.885966 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:47Z","lastTransitionTime":"2026-02-02T17:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.899754 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:47Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.927036 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827
ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:47Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.940676 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:47Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.956205 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:47Z is after 2025-08-24T17:21:41Z" Feb 
02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.972595 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:47Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.985436 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:47Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.988122 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.988178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.988189 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.988203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:47 crc kubenswrapper[4858]: I0202 17:15:47.988212 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:47Z","lastTransitionTime":"2026-02-02T17:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.004567 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:48Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.019794 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbl
g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:48Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.052674 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616f491a213328c1d78ab637e5e24b5438294b9a
4d5b1db37e95f8b26cc2288d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d9396f44f74445d0964a856afd2c936bd5d901265bcf8a214ddcbd748aa377\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:32Z\\\",\\\"message\\\":\\\"467779 6318 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:32.468318 6318 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 17:15:32.468530 6318 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0202 17:15:32.469060 6318 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:32.469125 6318 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:32.469175 6318 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 17:15:32.469233 6318 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:32.469334 6318 factory.go:656] Stopping watch factory\\\\nI0202 17:15:32.469374 6318 ovnkube.go:599] Stopped ovnkube\\\\nI0202 17:15:32.469403 6318 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:32.469344 6318 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:32.469438 6318 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 17:15:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:47Z\\\",\\\"message\\\":\\\"2 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:47.451911 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 17:15:47.451888 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:47.452188 6522 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 17:15:47.452211 6522 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 17:15:47.452238 6522 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 17:15:47.452259 6522 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:47.452274 6522 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 17:15:47.453331 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:47.453365 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:47.453413 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 17:15:47.453474 6522 factory.go:656] Stopping watch factory\\\\nI0202 17:15:47.453506 6522 handler.go:208] Removed *v1.EgressIP event 
handler 8\\\\nI0202 17:15:47.453520 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:47.453532 6522 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32
e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:48Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.090713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.090746 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.090754 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.090767 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.090776 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:48Z","lastTransitionTime":"2026-02-02T17:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.194302 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.194342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.194370 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.194385 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.194394 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:48Z","lastTransitionTime":"2026-02-02T17:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.297828 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.297888 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.297907 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.297930 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.297951 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:48Z","lastTransitionTime":"2026-02-02T17:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.366841 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 14:40:09.855578181 +0000 UTC Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.399912 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.400002 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:48 crc kubenswrapper[4858]: E0202 17:15:48.400066 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.399923 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:48 crc kubenswrapper[4858]: E0202 17:15:48.400164 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.400370 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:48 crc kubenswrapper[4858]: E0202 17:15:48.400367 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.400417 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:48 crc kubenswrapper[4858]: E0202 17:15:48.400449 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.400480 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.400509 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.400537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.400559 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:48Z","lastTransitionTime":"2026-02-02T17:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.503761 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.503839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.503861 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.503892 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.503915 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:48Z","lastTransitionTime":"2026-02-02T17:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.605846 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.605877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.605885 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.605898 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.605908 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:48Z","lastTransitionTime":"2026-02-02T17:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.709258 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.709311 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.709323 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.709343 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.709356 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:48Z","lastTransitionTime":"2026-02-02T17:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.767240 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wkm4w_ce405d19-c944-4a11-8195-bca9289b8d73/ovnkube-controller/2.log" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.774016 4858 scope.go:117] "RemoveContainer" containerID="616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d" Feb 02 17:15:48 crc kubenswrapper[4858]: E0202 17:15:48.774187 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wkm4w_openshift-ovn-kubernetes(ce405d19-c944-4a11-8195-bca9289b8d73)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.793059 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:48Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.812341 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.812449 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.812478 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.812509 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.812533 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:48Z","lastTransitionTime":"2026-02-02T17:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.815697 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd7f6cd899675ca4ccd2da634b9c4865505db0ac1ca0f2ee8e2c65516fd2c190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c9a3c7ca03f3b32cb484e20e016538920e05487b8c52cbe92c86e62ce30fe85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:48Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.831048 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8jfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:48Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:48 crc 
kubenswrapper[4858]: I0202 17:15:48.848652 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:48Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.871188 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:48Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.900167 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:48Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.915711 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.915779 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.915803 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.915834 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.915860 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:48Z","lastTransitionTime":"2026-02-02T17:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.923388 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:48Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.942824 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:48Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.960522 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:48Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.977671 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:48Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:48 crc kubenswrapper[4858]: I0202 17:15:48.992836 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:48Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.007652 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:49Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.019300 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.019382 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.019399 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.019811 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.019848 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:49Z","lastTransitionTime":"2026-02-02T17:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.022656 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:49Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.037819 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fbl
g5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:49Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.057722 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616f491a213328c1d78ab637e5e24b5438294b9a
4d5b1db37e95f8b26cc2288d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:47Z\\\",\\\"message\\\":\\\"2 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:47.451911 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 17:15:47.451888 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:47.452188 6522 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 17:15:47.452211 6522 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 17:15:47.452238 6522 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 17:15:47.452259 6522 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:47.452274 6522 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 17:15:47.453331 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:47.453365 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:47.453413 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 17:15:47.453474 6522 factory.go:656] Stopping watch factory\\\\nI0202 17:15:47.453506 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 17:15:47.453520 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:47.453532 6522 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wkm4w_openshift-ovn-kubernetes(ce405d19-c944-4a11-8195-bca9289b8d73)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:49Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.089345 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f5
61118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:49Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.107361 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:49Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.122461 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.122512 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.122529 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.122553 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.122570 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:49Z","lastTransitionTime":"2026-02-02T17:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.225851 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.225930 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.225955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.226023 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.226053 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:49Z","lastTransitionTime":"2026-02-02T17:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.330478 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.330524 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.330532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.330546 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.330556 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:49Z","lastTransitionTime":"2026-02-02T17:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.367769 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 09:56:21.997332468 +0000 UTC Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.432457 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.432497 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.432506 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.432521 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.432531 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:49Z","lastTransitionTime":"2026-02-02T17:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.536092 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.536130 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.536138 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.536151 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.536160 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:49Z","lastTransitionTime":"2026-02-02T17:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.640102 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.640165 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.640182 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.640205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.640223 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:49Z","lastTransitionTime":"2026-02-02T17:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.742649 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.742689 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.742699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.742715 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.742726 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:49Z","lastTransitionTime":"2026-02-02T17:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.845876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.846046 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.846067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.846129 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.846155 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:49Z","lastTransitionTime":"2026-02-02T17:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.949473 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.949538 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.949548 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.949565 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:49 crc kubenswrapper[4858]: I0202 17:15:49.949576 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:49Z","lastTransitionTime":"2026-02-02T17:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.052500 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.052567 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.052583 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.052639 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.052661 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:50Z","lastTransitionTime":"2026-02-02T17:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.156107 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.156147 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.156155 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.156172 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.156198 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:50Z","lastTransitionTime":"2026-02-02T17:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.259567 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.259631 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.259648 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.259671 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.259690 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:50Z","lastTransitionTime":"2026-02-02T17:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.363512 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.363587 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.363605 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.363631 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.363651 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:50Z","lastTransitionTime":"2026-02-02T17:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.368839 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 16:22:32.759956263 +0000 UTC Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.400326 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.400385 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.400387 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:50 crc kubenswrapper[4858]: E0202 17:15:50.400496 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.400511 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:50 crc kubenswrapper[4858]: E0202 17:15:50.400644 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:15:50 crc kubenswrapper[4858]: E0202 17:15:50.400779 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:15:50 crc kubenswrapper[4858]: E0202 17:15:50.400941 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.428958 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:50Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.449847 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:50Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.470955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.471071 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.471097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.471144 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.471203 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:50Z","lastTransitionTime":"2026-02-02T17:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.475104 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd7f6cd899675ca4ccd2da634b9c4865505db0ac1ca0f2ee8e2c65516fd2c190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c9a3c7ca03f3b32cb484e20e016538920e05487b8c52cbe92c86e62ce30fe85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:50Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.496171 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8jfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:50Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:50 crc 
kubenswrapper[4858]: I0202 17:15:50.518329 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:50Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.533209 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs\") pod \"network-metrics-daemon-t8jfm\" (UID: \"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\") " pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:15:50 crc kubenswrapper[4858]: E0202 17:15:50.533365 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 17:15:50 crc kubenswrapper[4858]: E0202 17:15:50.533441 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs podName:8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122 nodeName:}" 
failed. No retries permitted until 2026-02-02 17:16:06.533419746 +0000 UTC m=+67.685835021 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs") pod "network-metrics-daemon-t8jfm" (UID: "8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.541747 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"
name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\"
:{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:50Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.556863 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:50Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.573154 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:50Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.575710 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.575761 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.575775 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.575795 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.575810 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:50Z","lastTransitionTime":"2026-02-02T17:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.589421 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:50Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.614649 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:47Z\\\",\\\"message\\\":\\\"2 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:47.451911 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 17:15:47.451888 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:47.452188 6522 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 17:15:47.452211 6522 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 17:15:47.452238 6522 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 17:15:47.452259 6522 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:47.452274 6522 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 17:15:47.453331 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:47.453365 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:47.453413 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 17:15:47.453474 6522 factory.go:656] Stopping watch factory\\\\nI0202 17:15:47.453506 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 17:15:47.453520 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:47.453532 6522 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wkm4w_openshift-ovn-kubernetes(ce405d19-c944-4a11-8195-bca9289b8d73)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:50Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.646582 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f5
61118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:50Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.668139 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:50Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.677851 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.677900 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.677916 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.677941 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.677958 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:50Z","lastTransitionTime":"2026-02-02T17:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.689546 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:50Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.705815 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:50Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.720802 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:50Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.741323 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:50Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.758914 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:50Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.780245 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.780347 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.780367 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.780425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.780447 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:50Z","lastTransitionTime":"2026-02-02T17:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.883962 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.884074 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.884092 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.884117 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.884141 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:50Z","lastTransitionTime":"2026-02-02T17:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.987348 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.987448 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.987465 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.987492 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:50 crc kubenswrapper[4858]: I0202 17:15:50.987510 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:50Z","lastTransitionTime":"2026-02-02T17:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.091423 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.091514 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.091539 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.091607 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.091630 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:51Z","lastTransitionTime":"2026-02-02T17:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.194790 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.194851 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.194872 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.194904 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.194930 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:51Z","lastTransitionTime":"2026-02-02T17:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.299152 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.299221 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.299244 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.299273 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.299295 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:51Z","lastTransitionTime":"2026-02-02T17:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.369185 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 16:00:11.632390879 +0000 UTC Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.402879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.402943 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.402966 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.403025 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.403048 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:51Z","lastTransitionTime":"2026-02-02T17:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.506505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.506774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.506796 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.506832 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.506850 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:51Z","lastTransitionTime":"2026-02-02T17:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.610188 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.610260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.610279 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.610305 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.610323 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:51Z","lastTransitionTime":"2026-02-02T17:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.713753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.713809 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.713825 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.713849 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.713866 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:51Z","lastTransitionTime":"2026-02-02T17:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.816704 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.816749 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.816760 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.816778 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.816790 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:51Z","lastTransitionTime":"2026-02-02T17:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.920745 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.920805 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.920820 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.920845 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:51 crc kubenswrapper[4858]: I0202 17:15:51.920861 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:51Z","lastTransitionTime":"2026-02-02T17:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.023966 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.024070 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.024090 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.024121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.024146 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:52Z","lastTransitionTime":"2026-02-02T17:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.127293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.127696 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.127774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.127845 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.127950 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:52Z","lastTransitionTime":"2026-02-02T17:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.230889 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.230945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.230962 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.231023 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.231046 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:52Z","lastTransitionTime":"2026-02-02T17:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.251362 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:15:52 crc kubenswrapper[4858]: E0202 17:15:52.251748 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:16:24.251715123 +0000 UTC m=+85.404130428 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.252296 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:52 crc kubenswrapper[4858]: E0202 17:15:52.252528 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 17:15:52 crc kubenswrapper[4858]: E0202 17:15:52.252567 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 17:15:52 crc kubenswrapper[4858]: E0202 17:15:52.252587 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:52 crc kubenswrapper[4858]: E0202 17:15:52.252812 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 17:15:52 crc kubenswrapper[4858]: E0202 17:15:52.252925 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 17:16:24.252897645 +0000 UTC m=+85.405312950 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 17:15:52 crc kubenswrapper[4858]: E0202 17:15:52.252956 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 17:16:24.252940746 +0000 UTC m=+85.405356041 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.252842 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.253430 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:52 crc kubenswrapper[4858]: E0202 17:15:52.253552 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 17:15:52 crc kubenswrapper[4858]: E0202 17:15:52.253906 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 17:16:24.253882872 +0000 UTC m=+85.406298177 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.334113 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.334172 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.334190 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.334213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.334230 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:52Z","lastTransitionTime":"2026-02-02T17:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.355069 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:52 crc kubenswrapper[4858]: E0202 17:15:52.355388 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 17:15:52 crc kubenswrapper[4858]: E0202 17:15:52.355438 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 17:15:52 crc kubenswrapper[4858]: E0202 17:15:52.355457 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:52 crc kubenswrapper[4858]: E0202 17:15:52.355532 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 17:16:24.355510315 +0000 UTC m=+85.507925610 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.370020 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 21:33:56.951110352 +0000 UTC Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.400720 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.400790 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.400720 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.400920 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:52 crc kubenswrapper[4858]: E0202 17:15:52.401144 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:15:52 crc kubenswrapper[4858]: E0202 17:15:52.401389 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:15:52 crc kubenswrapper[4858]: E0202 17:15:52.401473 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:15:52 crc kubenswrapper[4858]: E0202 17:15:52.401644 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.437795 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.437849 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.437892 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.437921 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.437946 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:52Z","lastTransitionTime":"2026-02-02T17:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.540967 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.541033 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.541045 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.541063 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.541075 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:52Z","lastTransitionTime":"2026-02-02T17:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.643968 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.644027 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.644039 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.644051 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.644061 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:52Z","lastTransitionTime":"2026-02-02T17:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.746906 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.746949 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.746962 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.747019 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.747034 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:52Z","lastTransitionTime":"2026-02-02T17:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.849914 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.849966 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.850003 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.850024 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.850040 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:52Z","lastTransitionTime":"2026-02-02T17:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.954281 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.954368 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.954701 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.954770 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:52 crc kubenswrapper[4858]: I0202 17:15:52.954788 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:52Z","lastTransitionTime":"2026-02-02T17:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.058496 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.058615 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.058637 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.058668 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.058692 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:53Z","lastTransitionTime":"2026-02-02T17:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.161521 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.161585 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.161602 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.161647 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.161666 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:53Z","lastTransitionTime":"2026-02-02T17:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.264268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.264314 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.264326 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.264343 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.264357 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:53Z","lastTransitionTime":"2026-02-02T17:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.378563 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 20:31:02.350571722 +0000 UTC Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.379848 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.379891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.379899 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.379914 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.379924 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:53Z","lastTransitionTime":"2026-02-02T17:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.482913 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.483404 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.483551 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.483708 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.483830 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:53Z","lastTransitionTime":"2026-02-02T17:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.587132 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.587189 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.587206 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.587231 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.587248 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:53Z","lastTransitionTime":"2026-02-02T17:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.690205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.690254 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.690265 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.690282 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.690294 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:53Z","lastTransitionTime":"2026-02-02T17:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.792945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.793085 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.793109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.793135 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.793155 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:53Z","lastTransitionTime":"2026-02-02T17:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.897237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.897323 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.897347 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.897376 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:53 crc kubenswrapper[4858]: I0202 17:15:53.897395 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:53Z","lastTransitionTime":"2026-02-02T17:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.000426 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.000499 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.000517 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.000545 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.000563 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:54Z","lastTransitionTime":"2026-02-02T17:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.104134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.104188 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.104199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.104218 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.104232 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:54Z","lastTransitionTime":"2026-02-02T17:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.207333 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.207409 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.207432 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.207460 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.207480 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:54Z","lastTransitionTime":"2026-02-02T17:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.310958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.311116 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.311135 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.311180 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.311198 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:54Z","lastTransitionTime":"2026-02-02T17:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.378717 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 01:56:27.313814341 +0000 UTC Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.400422 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.400504 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:54 crc kubenswrapper[4858]: E0202 17:15:54.400607 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.400445 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.400632 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:54 crc kubenswrapper[4858]: E0202 17:15:54.400757 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:15:54 crc kubenswrapper[4858]: E0202 17:15:54.400882 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:15:54 crc kubenswrapper[4858]: E0202 17:15:54.401023 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.414230 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.414275 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.414286 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.414304 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.414317 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:54Z","lastTransitionTime":"2026-02-02T17:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.518376 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.518438 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.518457 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.518482 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.518505 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:54Z","lastTransitionTime":"2026-02-02T17:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.621748 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.621818 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.621836 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.621860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.621878 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:54Z","lastTransitionTime":"2026-02-02T17:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.724870 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.725218 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.725364 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.725583 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.725732 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:54Z","lastTransitionTime":"2026-02-02T17:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.779359 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.792635 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.798662 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92e
daf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:54Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.819915 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:54Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.828708 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.828758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.828782 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.828806 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.828824 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:54Z","lastTransitionTime":"2026-02-02T17:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
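Note: every status patch above dies at the same spot: the API server must call the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, and that endpoint's serving certificate expired on 2025-08-24 while the node clock reads 2026-02-02. A small diagnostic sketch that dials the endpoint from the node and prints the served certificate's validity window (address taken from the log; skipping verification is deliberate, since the chain is already known to be invalid):

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    func main() {
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        leaf := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("subject:  ", leaf.Subject)
        fmt.Println("notBefore:", leaf.NotBefore)
        fmt.Println("notAfter: ", leaf.NotAfter) // expect 2025-08-24T17:21:41Z per the log
    }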
Has your network provider started?"} Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.838738 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:54Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.875371 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827
ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:54Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.892705 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:54Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.911876 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:54Z is after 2025-08-24T17:21:41Z" Feb 
02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.926243 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:54Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.930905 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.930955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.931019 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.931066 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.931084 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:54Z","lastTransitionTime":"2026-02-02T17:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.940181 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:54Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.961113 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:54Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:54 crc kubenswrapper[4858]: I0202 17:15:54.976335 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:54Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.001970 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:47Z\\\",\\\"message\\\":\\\"2 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:47.451911 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 17:15:47.451888 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:47.452188 6522 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 17:15:47.452211 6522 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 17:15:47.452238 6522 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 17:15:47.452259 6522 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:47.452274 6522 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 17:15:47.453331 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:47.453365 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:47.453413 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 17:15:47.453474 6522 factory.go:656] Stopping watch factory\\\\nI0202 17:15:47.453506 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 17:15:47.453520 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:47.453532 6522 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wkm4w_openshift-ovn-kubernetes(ce405d19-c944-4a11-8195-bca9289b8d73)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:54Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.016746 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:55Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.033938 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.034187 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.034307 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.034408 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.034506 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:55Z","lastTransitionTime":"2026-02-02T17:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.034702 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:55Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.047920 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd7f6cd899675ca4ccd2da634b9c4865505db0ac1ca0f2ee8e2c65516fd2c190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c9a3c7ca03f3b32cb484e20e016538920e05487b8c52cbe92c86e62ce30fe85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:55Z is after 2025-08-24T17:21:41Z" Feb 02 
17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.059321 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8jfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:55Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.076271 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:55Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.096101 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:55Z is after 2025-08-24T17:21:41Z" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.136986 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.137026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:55 crc 
kubenswrapper[4858]: I0202 17:15:55.137038 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.137056 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.137069 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:55Z","lastTransitionTime":"2026-02-02T17:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.239653 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.239702 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.239719 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.239746 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.239761 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:55Z","lastTransitionTime":"2026-02-02T17:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.342248 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.342707 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.343375 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.343402 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.343414 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:55Z","lastTransitionTime":"2026-02-02T17:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.379526 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 20:23:40.205070928 +0000 UTC Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.446707 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.446769 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.446785 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.446809 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.446830 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:55Z","lastTransitionTime":"2026-02-02T17:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.550463 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.550545 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.550564 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.550591 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.550612 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:55Z","lastTransitionTime":"2026-02-02T17:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.653364 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.653410 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.653425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.653447 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.653471 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:55Z","lastTransitionTime":"2026-02-02T17:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.756826 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.756888 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.756903 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.756961 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.757003 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:55Z","lastTransitionTime":"2026-02-02T17:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.859644 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.859679 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.859690 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.859706 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.859719 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:55Z","lastTransitionTime":"2026-02-02T17:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.961913 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.961939 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.961946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.961958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:55 crc kubenswrapper[4858]: I0202 17:15:55.961966 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:55Z","lastTransitionTime":"2026-02-02T17:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.065299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.065365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.065385 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.065410 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.065430 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:56Z","lastTransitionTime":"2026-02-02T17:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.168212 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.168310 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.168328 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.168352 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.168370 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:56Z","lastTransitionTime":"2026-02-02T17:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.270534 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.270611 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.270634 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.270667 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.270689 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:56Z","lastTransitionTime":"2026-02-02T17:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.374317 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.374380 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.374395 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.374417 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.374433 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:56Z","lastTransitionTime":"2026-02-02T17:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.380558 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 19:04:15.534688792 +0000 UTC Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.399993 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.400097 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.400126 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.400151 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:15:56 crc kubenswrapper[4858]: E0202 17:15:56.400316 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:15:56 crc kubenswrapper[4858]: E0202 17:15:56.400545 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:15:56 crc kubenswrapper[4858]: E0202 17:15:56.400754 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:15:56 crc kubenswrapper[4858]: E0202 17:15:56.400908 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.477605 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.477680 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.477704 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.477735 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.477758 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:56Z","lastTransitionTime":"2026-02-02T17:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.580819 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.580876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.580893 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.580919 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.580939 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:56Z","lastTransitionTime":"2026-02-02T17:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.672518 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.672583 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.672600 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.672627 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.672651 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:56Z","lastTransitionTime":"2026-02-02T17:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:56 crc kubenswrapper[4858]: E0202 17:15:56.694567 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:56Z is after 
2025-08-24T17:21:41Z" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.699926 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.700082 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.700110 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.700141 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.700164 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:56Z","lastTransitionTime":"2026-02-02T17:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:56 crc kubenswrapper[4858]: E0202 17:15:56.719336 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],[...images, nodeInfo, and runtimeHandlers identical to the preceding status patch, elided...]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:56Z is after 
2025-08-24T17:21:41Z" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.723629 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.723696 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.723719 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.723746 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.723763 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:56Z","lastTransitionTime":"2026-02-02T17:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:56 crc kubenswrapper[4858]: E0202 17:15:56.743263 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],[...images, nodeInfo, and runtimeHandlers identical to the preceding status patch, elided...]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:56Z is after 
2025-08-24T17:21:41Z" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.748115 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.748209 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.748228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.748258 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.748283 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:56Z","lastTransitionTime":"2026-02-02T17:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:56 crc kubenswrapper[4858]: E0202 17:15:56.769905 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],[...images, nodeInfo, and runtimeHandlers identical to the preceding status patch, elided...]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:56Z is after 
2025-08-24T17:21:41Z" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.775856 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.776127 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.776257 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.776387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.776523 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:56Z","lastTransitionTime":"2026-02-02T17:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:56 crc kubenswrapper[4858]: E0202 17:15:56.796802 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],[...images, nodeInfo, and runtimeHandlers identical to the preceding status patch, elided...]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:15:56Z is after 
2025-08-24T17:21:41Z" Feb 02 17:15:56 crc kubenswrapper[4858]: E0202 17:15:56.797422 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.799745 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.800048 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.800225 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.800511 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.800740 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:56Z","lastTransitionTime":"2026-02-02T17:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.904248 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.904295 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.904311 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.904336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:56 crc kubenswrapper[4858]: I0202 17:15:56.904354 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:56Z","lastTransitionTime":"2026-02-02T17:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.013648 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.013703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.013719 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.013743 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.013761 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:57Z","lastTransitionTime":"2026-02-02T17:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.117032 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.117099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.117116 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.117142 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.117162 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:57Z","lastTransitionTime":"2026-02-02T17:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.220456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.220547 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.220565 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.220588 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.220605 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:57Z","lastTransitionTime":"2026-02-02T17:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.323178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.323603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.323801 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.323947 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.324184 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:57Z","lastTransitionTime":"2026-02-02T17:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.381860 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 17:56:02.952476243 +0000 UTC Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.427111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.427497 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.427657 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.427897 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.428496 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:57Z","lastTransitionTime":"2026-02-02T17:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.533155 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.533222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.533240 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.533264 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.533284 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:57Z","lastTransitionTime":"2026-02-02T17:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.636077 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.636123 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.636168 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.636184 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.636194 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:57Z","lastTransitionTime":"2026-02-02T17:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.739465 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.739526 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.739543 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.739566 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.739586 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:57Z","lastTransitionTime":"2026-02-02T17:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.843387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.843469 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.843497 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.843530 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.843553 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:57Z","lastTransitionTime":"2026-02-02T17:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.946911 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.947009 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.947052 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.947077 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:57 crc kubenswrapper[4858]: I0202 17:15:57.947095 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:57Z","lastTransitionTime":"2026-02-02T17:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.050524 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.050588 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.050607 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.050651 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.050680 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:58Z","lastTransitionTime":"2026-02-02T17:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.153652 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.153713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.153732 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.153757 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.153775 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:58Z","lastTransitionTime":"2026-02-02T17:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.257291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.257333 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.257347 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.257367 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.257381 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:58Z","lastTransitionTime":"2026-02-02T17:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.360628 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.360699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.360719 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.360743 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.360762 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:58Z","lastTransitionTime":"2026-02-02T17:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.382616 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 19:21:59.476357523 +0000 UTC Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.400052 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.400147 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.400197 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:15:58 crc kubenswrapper[4858]: E0202 17:15:58.400363 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.400429 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:15:58 crc kubenswrapper[4858]: E0202 17:15:58.400556 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:15:58 crc kubenswrapper[4858]: E0202 17:15:58.400708 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:15:58 crc kubenswrapper[4858]: E0202 17:15:58.400906 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.463666 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.463779 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.463803 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.463831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.463852 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:58Z","lastTransitionTime":"2026-02-02T17:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.567533 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.567591 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.567610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.567655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.567684 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:58Z","lastTransitionTime":"2026-02-02T17:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.671318 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.671398 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.671422 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.671449 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.671470 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:58Z","lastTransitionTime":"2026-02-02T17:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.774693 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.774748 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.774778 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.774820 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.774845 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:58Z","lastTransitionTime":"2026-02-02T17:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.877470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.877509 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.877518 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.877532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.877541 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:58Z","lastTransitionTime":"2026-02-02T17:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.980112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.980170 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.980194 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.980217 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:58 crc kubenswrapper[4858]: I0202 17:15:58.980234 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:58Z","lastTransitionTime":"2026-02-02T17:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.083166 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.083229 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.083250 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.083277 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.083297 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:59Z","lastTransitionTime":"2026-02-02T17:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.186196 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.186258 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.186274 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.186297 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.186316 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:59Z","lastTransitionTime":"2026-02-02T17:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.289325 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.289388 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.289406 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.289429 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.289447 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:59Z","lastTransitionTime":"2026-02-02T17:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.383778 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 21:20:21.679096843 +0000 UTC Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.392413 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.392564 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.392594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.392629 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.392654 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:59Z","lastTransitionTime":"2026-02-02T17:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.496386 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.496471 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.496488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.496511 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.496527 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:59Z","lastTransitionTime":"2026-02-02T17:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.599438 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.599722 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.599808 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.599871 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.599961 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:59Z","lastTransitionTime":"2026-02-02T17:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.704112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.704173 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.704191 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.704219 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.704241 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:59Z","lastTransitionTime":"2026-02-02T17:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.807480 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.807569 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.807592 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.807622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.807647 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:59Z","lastTransitionTime":"2026-02-02T17:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.910470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.910541 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.910559 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.910590 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:15:59 crc kubenswrapper[4858]: I0202 17:15:59.910614 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:15:59Z","lastTransitionTime":"2026-02-02T17:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.013350 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.013668 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.013766 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.013865 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.014001 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:00Z","lastTransitionTime":"2026-02-02T17:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.116924 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.116995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.117010 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.117034 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.117050 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:00Z","lastTransitionTime":"2026-02-02T17:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.220406 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.220455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.220467 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.220486 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.220500 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:00Z","lastTransitionTime":"2026-02-02T17:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.323443 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.323482 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.323500 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.323519 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.323535 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:00Z","lastTransitionTime":"2026-02-02T17:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.384454 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 20:18:59.432224154 +0000 UTC Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.400302 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:00 crc kubenswrapper[4858]: E0202 17:16:00.400491 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.400877 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:00 crc kubenswrapper[4858]: E0202 17:16:00.401038 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.401109 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.401162 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:00 crc kubenswrapper[4858]: E0202 17:16:00.401292 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:00 crc kubenswrapper[4858]: E0202 17:16:00.401648 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.416601 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8jfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:00Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.426313 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:00 
crc kubenswrapper[4858]: I0202 17:16:00.426643 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.426718 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.426782 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.426845 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:00Z","lastTransitionTime":"2026-02-02T17:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.434764 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c423d6b-08b2-46d8-886e-3e7daea27bfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d719263f90620c27bbf86fa46cc1140fd71aa3376d2c569ad8d07ff3e806ec1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9312c3670640a2f0b63e269409ac820988ce1bac655c3a63f49c02fa88afd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7c56ee2219f48c1296c4cb2e3cc2d390921a11d0faaf18ebcf1365dff431c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:00Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.448416 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:00Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.460469 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:00Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.473173 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd7f6cd899675ca4ccd2da634b9c4865505db0ac1ca0f2ee8e2c65516fd2c190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c9a3c7ca03f3b32cb484e20e016538920e05487b8c52cbe92c86e62ce30fe85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-clus
ter-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:00Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.487828 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355
e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 
17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:00Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.506451 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:00Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.519557 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:00Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.529887 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.530187 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.530868 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.530910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.530924 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:00Z","lastTransitionTime":"2026-02-02T17:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.531601 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:00Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.544674 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:00Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.556774 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:00Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.571589 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\
\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:00Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.582379 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea17
7225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:00Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.604402 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616f491a213328c1d78ab637e5e24b5438294b9a
4d5b1db37e95f8b26cc2288d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:47Z\\\",\\\"message\\\":\\\"2 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:47.451911 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 17:15:47.451888 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:47.452188 6522 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 17:15:47.452211 6522 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 17:15:47.452238 6522 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 17:15:47.452259 6522 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:47.452274 6522 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 17:15:47.453331 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:47.453365 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:47.453413 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 17:15:47.453474 6522 factory.go:656] Stopping watch factory\\\\nI0202 17:15:47.453506 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 17:15:47.453520 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:47.453532 6522 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wkm4w_openshift-ovn-kubernetes(ce405d19-c944-4a11-8195-bca9289b8d73)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:00Z is after 2025-08-24T17:21:41Z"
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.627062 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:00Z is after 2025-08-24T17:21:41Z"
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.633569 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.633624 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.633641 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.633664 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.633679 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:00Z","lastTransitionTime":"2026-02-02T17:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.642872 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:00Z is after 2025-08-24T17:21:41Z"
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.662424 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:00Z is after 2025-08-24T17:21:41Z"
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.673695 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:00Z is after 2025-08-24T17:21:41Z"
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.737107 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.737169 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.737187 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.737210 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.737228 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:00Z","lastTransitionTime":"2026-02-02T17:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.840110 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.840185 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.840220 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.840251 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.840271 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:00Z","lastTransitionTime":"2026-02-02T17:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.943327 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.943407 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.943432 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.943463 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:00 crc kubenswrapper[4858]: I0202 17:16:00.943514 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:00Z","lastTransitionTime":"2026-02-02T17:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.046292 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.046384 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.046411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.046459 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.046493 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:01Z","lastTransitionTime":"2026-02-02T17:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.149360 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.149429 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.149455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.149485 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.149505 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:01Z","lastTransitionTime":"2026-02-02T17:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.252776 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.252860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.252884 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.252917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.252940 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:01Z","lastTransitionTime":"2026-02-02T17:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.355816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.356290 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.356322 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.356354 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.356379 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:01Z","lastTransitionTime":"2026-02-02T17:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.385537 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 04:41:21.058281742 +0000 UTC
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.459562 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.459938 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.460067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.460166 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.460253 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:01Z","lastTransitionTime":"2026-02-02T17:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.562547 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.562812 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.562894 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.563007 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.563089 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:01Z","lastTransitionTime":"2026-02-02T17:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.665935 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.666000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.666010 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.666027 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.666039 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:01Z","lastTransitionTime":"2026-02-02T17:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.768747 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.768786 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.768797 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.768813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.768830 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:01Z","lastTransitionTime":"2026-02-02T17:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.872214 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.872516 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.872645 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.872745 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.872848 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:01Z","lastTransitionTime":"2026-02-02T17:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.975595 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.975643 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.975658 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.975679 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:01 crc kubenswrapper[4858]: I0202 17:16:01.975693 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:01Z","lastTransitionTime":"2026-02-02T17:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.078235 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.078285 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.078299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.078320 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.078333 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:02Z","lastTransitionTime":"2026-02-02T17:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.181204 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.181266 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.181278 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.181297 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.181313 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:02Z","lastTransitionTime":"2026-02-02T17:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.284375 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.284475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.284488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.284507 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.284519 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:02Z","lastTransitionTime":"2026-02-02T17:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.385694 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 14:22:52.232034756 +0000 UTC
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.388191 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.388255 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.388280 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.388310 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.388336 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:02Z","lastTransitionTime":"2026-02-02T17:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.400714 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.400837 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.400774 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 17:16:02 crc kubenswrapper[4858]: E0202 17:16:02.401077 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.401263 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 17:16:02 crc kubenswrapper[4858]: E0202 17:16:02.401463 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 17:16:02 crc kubenswrapper[4858]: E0202 17:16:02.402335 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 17:16:02 crc kubenswrapper[4858]: E0202 17:16:02.402562 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.402844 4858 scope.go:117] "RemoveContainer" containerID="616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d"
Feb 02 17:16:02 crc kubenswrapper[4858]: E0202 17:16:02.403119 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-wkm4w_openshift-ovn-kubernetes(ce405d19-c944-4a11-8195-bca9289b8d73)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" podUID="ce405d19-c944-4a11-8195-bca9289b8d73"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.490398 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.490430 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.490437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.490451 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.490462 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:02Z","lastTransitionTime":"2026-02-02T17:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.593223 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.593295 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.593313 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.593338 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.593356 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:02Z","lastTransitionTime":"2026-02-02T17:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.696220 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.696265 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.696273 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.696288 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.696297 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:02Z","lastTransitionTime":"2026-02-02T17:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.798805 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.798842 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.798850 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.798864 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.798872 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:02Z","lastTransitionTime":"2026-02-02T17:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.901871 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.901961 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.902020 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.902050 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:02 crc kubenswrapper[4858]: I0202 17:16:02.902075 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:02Z","lastTransitionTime":"2026-02-02T17:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.005303 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.005346 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.005357 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.005374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.005385 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:03Z","lastTransitionTime":"2026-02-02T17:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.107529 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.108131 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.108320 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.108503 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.108695 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:03Z","lastTransitionTime":"2026-02-02T17:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.211908 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.213118 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.213307 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.213462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.213604 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:03Z","lastTransitionTime":"2026-02-02T17:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.315888 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.315951 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.316002 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.316029 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.316046 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:03Z","lastTransitionTime":"2026-02-02T17:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.386427 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 09:54:35.862817102 +0000 UTC
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.418776 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.418812 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.418823 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.418837 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.418848 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:03Z","lastTransitionTime":"2026-02-02T17:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.522112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.522145 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.522158 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.522173 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.522186 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:03Z","lastTransitionTime":"2026-02-02T17:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.625112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.625190 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.625212 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.625242 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.625281 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:03Z","lastTransitionTime":"2026-02-02T17:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.728389 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.728462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.728501 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.728533 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.728554 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:03Z","lastTransitionTime":"2026-02-02T17:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.832097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.832163 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.832181 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.832203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.832220 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:03Z","lastTransitionTime":"2026-02-02T17:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.934822 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.934861 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.934871 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.934885 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:03 crc kubenswrapper[4858]: I0202 17:16:03.934897 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:03Z","lastTransitionTime":"2026-02-02T17:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.038629 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.038680 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.038696 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.038714 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.038725 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:04Z","lastTransitionTime":"2026-02-02T17:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.141676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.141709 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.141721 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.141736 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.141746 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:04Z","lastTransitionTime":"2026-02-02T17:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.244558 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.244595 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.244606 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.244622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.244632 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:04Z","lastTransitionTime":"2026-02-02T17:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.347879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.347934 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.347946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.347964 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.347995 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:04Z","lastTransitionTime":"2026-02-02T17:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.387321 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 20:46:59.25222295 +0000 UTC
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.399785 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 17:16:04 crc kubenswrapper[4858]: E0202 17:16:04.399931 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.400071 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.400192 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:04 crc kubenswrapper[4858]: E0202 17:16:04.400189 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.400244 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:04 crc kubenswrapper[4858]: E0202 17:16:04.400307 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:04 crc kubenswrapper[4858]: E0202 17:16:04.400359 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.451121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.451162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.451171 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.451186 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.451195 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:04Z","lastTransitionTime":"2026-02-02T17:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.553896 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.553942 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.553955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.553992 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.554007 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:04Z","lastTransitionTime":"2026-02-02T17:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.657035 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.657082 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.657093 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.657108 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.657120 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:04Z","lastTransitionTime":"2026-02-02T17:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.759152 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.759188 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.759199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.759216 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.759227 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:04Z","lastTransitionTime":"2026-02-02T17:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.861269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.861362 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.861378 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.861396 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.861408 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:04Z","lastTransitionTime":"2026-02-02T17:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.964122 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.964194 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.964211 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.964279 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:04 crc kubenswrapper[4858]: I0202 17:16:04.964298 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:04Z","lastTransitionTime":"2026-02-02T17:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.069152 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.069205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.069222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.069246 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.069264 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:05Z","lastTransitionTime":"2026-02-02T17:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.171582 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.171645 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.171664 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.171686 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.171702 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:05Z","lastTransitionTime":"2026-02-02T17:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.273942 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.273995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.274003 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.274037 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.274046 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:05Z","lastTransitionTime":"2026-02-02T17:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.376200 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.376245 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.376253 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.376266 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.376275 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:05Z","lastTransitionTime":"2026-02-02T17:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.387560 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 07:36:23.251752896 +0000 UTC Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.478484 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.478548 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.478566 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.478590 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.478608 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:05Z","lastTransitionTime":"2026-02-02T17:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.581037 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.581097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.581115 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.581139 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.581156 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:05Z","lastTransitionTime":"2026-02-02T17:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.683847 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.683902 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.683915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.683933 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.683945 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:05Z","lastTransitionTime":"2026-02-02T17:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.786218 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.786274 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.786293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.786316 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.786333 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:05Z","lastTransitionTime":"2026-02-02T17:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.888099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.888158 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.888180 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.888204 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.888221 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:05Z","lastTransitionTime":"2026-02-02T17:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.990479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.990519 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.990532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.990548 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:05 crc kubenswrapper[4858]: I0202 17:16:05.990557 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:05Z","lastTransitionTime":"2026-02-02T17:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.093167 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.093230 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.093249 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.093271 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.093282 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:06Z","lastTransitionTime":"2026-02-02T17:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.195706 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.195746 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.195756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.195774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.195785 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:06Z","lastTransitionTime":"2026-02-02T17:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.297815 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.297846 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.297854 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.297869 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.297877 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:06Z","lastTransitionTime":"2026-02-02T17:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.388286 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:31:12.015930011 +0000 UTC Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.399826 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.399853 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.399871 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.399997 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.400091 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.400107 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.400115 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.400131 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:06 crc kubenswrapper[4858]: E0202 17:16:06.400122 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.400141 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:06Z","lastTransitionTime":"2026-02-02T17:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:06 crc kubenswrapper[4858]: E0202 17:16:06.400219 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:06 crc kubenswrapper[4858]: E0202 17:16:06.400374 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:06 crc kubenswrapper[4858]: E0202 17:16:06.400510 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.502601 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.502639 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.502647 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.502661 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.502670 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:06Z","lastTransitionTime":"2026-02-02T17:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.605344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.605415 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.605439 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.605468 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.605489 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:06Z","lastTransitionTime":"2026-02-02T17:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.620874 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs\") pod \"network-metrics-daemon-t8jfm\" (UID: \"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\") " pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:06 crc kubenswrapper[4858]: E0202 17:16:06.621084 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 17:16:06 crc kubenswrapper[4858]: E0202 17:16:06.621144 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs podName:8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122 nodeName:}" failed. No retries permitted until 2026-02-02 17:16:38.621126937 +0000 UTC m=+99.773542202 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs") pod "network-metrics-daemon-t8jfm" (UID: "8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.707895 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.707943 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.707960 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.708011 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.708028 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:06Z","lastTransitionTime":"2026-02-02T17:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.810493 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.810564 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.810584 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.810730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.810784 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:06Z","lastTransitionTime":"2026-02-02T17:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.901965 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.902023 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.902035 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.902051 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.902061 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:06Z","lastTransitionTime":"2026-02-02T17:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:06 crc kubenswrapper[4858]: E0202 17:16:06.922381 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:06Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.925865 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.925894 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.925901 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.925912 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.925920 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:06Z","lastTransitionTime":"2026-02-02T17:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:06 crc kubenswrapper[4858]: E0202 17:16:06.939471 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:06Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.942895 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.942931 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.942943 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.942956 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.942967 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:06Z","lastTransitionTime":"2026-02-02T17:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:06 crc kubenswrapper[4858]: E0202 17:16:06.956879 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:06Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.961687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.961724 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.961736 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.961751 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.961761 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:06Z","lastTransitionTime":"2026-02-02T17:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:06 crc kubenswrapper[4858]: E0202 17:16:06.975449 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:06Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.979248 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.979280 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.979290 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.979302 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.979310 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:06Z","lastTransitionTime":"2026-02-02T17:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:06 crc kubenswrapper[4858]: E0202 17:16:06.992539 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:06Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:06 crc kubenswrapper[4858]: E0202 17:16:06.992709 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.994144 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
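All five patch attempts above fail for the same reason: the node.network-node-identity.openshift.io webhook serving on https://127.0.0.1:9743 presents a TLS certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-02. A minimal Go sketch for confirming this from the node itself; the address comes from the log, everything else is illustrative and not part of OpenShift:

    // certcheck.go: read the webhook's serving certificate and report its
    // validity window. InsecureSkipVerify is deliberate here: we want to
    // inspect the certificate even though verification would fail.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatalf("dial webhook: %v", err)
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("subject:  ", cert.Subject)
        fmt.Println("notBefore:", cert.NotBefore)
        fmt.Println("notAfter: ", cert.NotAfter)
        if time.Now().After(cert.NotAfter) {
            fmt.Println("certificate is expired, matching the x509 error in the log")
        }
    }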
event="NodeHasSufficientMemory" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.994178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.994189 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.994203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:06 crc kubenswrapper[4858]: I0202 17:16:06.994213 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:06Z","lastTransitionTime":"2026-02-02T17:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.096286 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.096321 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.096329 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.096343 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.096352 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:07Z","lastTransitionTime":"2026-02-02T17:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.199346 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.199379 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.199390 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.199407 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.199419 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:07Z","lastTransitionTime":"2026-02-02T17:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.302352 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.302452 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.302463 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.302476 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.302488 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:07Z","lastTransitionTime":"2026-02-02T17:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.389106 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 08:11:18.636501984 +0000 UTC Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.404749 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.404793 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.404802 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.404817 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.404826 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:07Z","lastTransitionTime":"2026-02-02T17:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
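The kubelet-serving certificate in the certificate_manager line above is a different certificate from the expired webhook one: it is valid until 2026-02-24, and the manager had scheduled rotation for 2025-12-15, already in the past relative to the node clock, so rotation is due. client-go's certificate manager picks the rotation deadline at a jittered point late in the certificate's lifetime; the sketch below assumes a uniform point in the 70-90% window and an assumed one-year lifetime, since neither the exact jitter nor notBefore appears in the log:

    // rotation.go: approximate how a rotation deadline like the one logged
    // above can be derived from a certificate's validity window.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a random instant late in the certificate's
    // lifetime (assumption: uniform in [70%, 90%] of the validity period).
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        frac := 0.7 + 0.2*rand.Float64()
        return notBefore.Add(time.Duration(float64(total) * frac))
    }

    func main() {
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiration from the log
        notBefore := notAfter.AddDate(-1, 0, 0)                   // assumed one-year lifetime
        fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
    }

With a one-year lifetime this lands between roughly mid-November and late January, which is consistent with the logged 2025-12-15 deadline.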
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.506734 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.506766 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.506775 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.506788 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.506798 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:07Z","lastTransitionTime":"2026-02-02T17:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.609881 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.609937 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.609953 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.610016 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.610051 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:07Z","lastTransitionTime":"2026-02-02T17:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.712424 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.712470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.712481 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.712509 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.712520 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:07Z","lastTransitionTime":"2026-02-02T17:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.814726 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.814760 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.814768 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.814782 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.814804 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:07Z","lastTransitionTime":"2026-02-02T17:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.917030 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.917075 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.917087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.917105 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:07 crc kubenswrapper[4858]: I0202 17:16:07.917117 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:07Z","lastTransitionTime":"2026-02-02T17:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.018908 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.018999 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.019017 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.019044 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.019060 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:08Z","lastTransitionTime":"2026-02-02T17:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.121303 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.121336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.121348 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.121364 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.121376 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:08Z","lastTransitionTime":"2026-02-02T17:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.223783 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.223818 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.223830 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.223850 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.223864 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:08Z","lastTransitionTime":"2026-02-02T17:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.326237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.327114 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.327163 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.327205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.327223 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:08Z","lastTransitionTime":"2026-02-02T17:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.390282 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 11:56:01.160897207 +0000 UTC
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.402129 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.402177 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 17:16:08 crc kubenswrapper[4858]: E0202 17:16:08.402239 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.402265 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm"
Feb 02 17:16:08 crc kubenswrapper[4858]: E0202 17:16:08.402405 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122"
Feb 02 17:16:08 crc kubenswrapper[4858]: E0202 17:16:08.402491 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.402556 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 17:16:08 crc kubenswrapper[4858]: E0202 17:16:08.402599 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.428841 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.428862 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.428870 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.428884 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.428894 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:08Z","lastTransitionTime":"2026-02-02T17:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.530933 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.530988 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.531000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.531017 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.531028 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:08Z","lastTransitionTime":"2026-02-02T17:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.633109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.633161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.633170 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.633185 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.633194 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:08Z","lastTransitionTime":"2026-02-02T17:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.735672 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.735705 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.735716 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.735733 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.735745 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:08Z","lastTransitionTime":"2026-02-02T17:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.838462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.838803 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.838943 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.839121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.839264 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:08Z","lastTransitionTime":"2026-02-02T17:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.941937 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.942018 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.942037 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.942060 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:08 crc kubenswrapper[4858]: I0202 17:16:08.942077 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:08Z","lastTransitionTime":"2026-02-02T17:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.044225 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.044246 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.044254 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.044265 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.044272 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:09Z","lastTransitionTime":"2026-02-02T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.147125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.147159 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.147169 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.147185 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.147196 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:09Z","lastTransitionTime":"2026-02-02T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.249252 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.249600 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.249672 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.249740 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.249801 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:09Z","lastTransitionTime":"2026-02-02T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.352631 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.352657 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.352665 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.352678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.352686 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:09Z","lastTransitionTime":"2026-02-02T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.390571 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 09:55:15.794270504 +0000 UTC
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.455039 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.455105 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.455124 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.455150 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.455169 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:09Z","lastTransitionTime":"2026-02-02T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.557809 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.557846 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.557861 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.557877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.557887 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:09Z","lastTransitionTime":"2026-02-02T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.660100 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.660129 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.660137 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.660150 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.660158 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:09Z","lastTransitionTime":"2026-02-02T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.762217 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.762246 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.762254 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.762267 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.762275 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:09Z","lastTransitionTime":"2026-02-02T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.849227 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9szlc_4bc7963e-1bdc-4038-805e-bd72fc217a13/kube-multus/0.log"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.849291 4858 generic.go:334] "Generic (PLEG): container finished" podID="4bc7963e-1bdc-4038-805e-bd72fc217a13" containerID="a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567" exitCode=1
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.849329 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9szlc" event={"ID":"4bc7963e-1bdc-4038-805e-bd72fc217a13","Type":"ContainerDied","Data":"a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567"}
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.849699 4858 scope.go:117] "RemoveContainer" containerID="a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.865820 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.865858 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.865867 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.865881 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.865891 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:09Z","lastTransitionTime":"2026-02-02T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.879741 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:47Z\\\",\\\"message\\\":\\\"2 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:47.451911 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 17:15:47.451888 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:47.452188 6522 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 17:15:47.452211 6522 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 17:15:47.452238 6522 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 17:15:47.452259 6522 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:47.452274 6522 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 17:15:47.453331 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:47.453365 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:47.453413 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 17:15:47.453474 6522 factory.go:656] Stopping watch factory\\\\nI0202 17:15:47.453506 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 17:15:47.453520 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:47.453532 6522 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wkm4w_openshift-ovn-kubernetes(ce405d19-c944-4a11-8195-bca9289b8d73)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:09Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.899490 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f5
61118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:09Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.911728 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:09Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.923751 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:09Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.936686 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:09Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.948624 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:09Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.966260 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:16:09Z\\\",\\\"message\\\":\\\"2026-02-02T17:15:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b3dbb378-edad-457f-9c64-6bd6330f05ec\\\\n2026-02-02T17:15:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b3dbb378-edad-457f-9c64-6bd6330f05ec to /host/opt/cni/bin/\\\\n2026-02-02T17:15:24Z [verbose] multus-daemon started\\\\n2026-02-02T17:15:24Z [verbose] Readiness Indicator file check\\\\n2026-02-02T17:16:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:09Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.968755 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.968802 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.968818 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.968839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.968857 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:09Z","lastTransitionTime":"2026-02-02T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.977879 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:09Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:09 crc kubenswrapper[4858]: I0202 17:16:09.988274 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c423d6b-08b2-46d8-886e-3e7daea27bfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d719263f90620c27bbf86fa46cc1140fd71aa3376d2c569ad8d07ff3e806ec1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9312c3670640a2f0b63e269409ac820988ce1bac655c3a63f49c02fa88afd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7c56ee2219f48c1296c4cb2e3cc2d390921a11d0faaf18ebcf1365dff431c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:09Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.001538 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.012866 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.025075 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd7f6cd899675ca4ccd2da634b9c4865505db0ac1ca0f2ee8e2c65516fd2c190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c9a3c7ca03f3b32cb484e20e016538920e05487b8c52cbe92c86e62ce30fe85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-clus
ter-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.037107 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8jfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.049405 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.062122 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.072767 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.072819 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:10 crc 
kubenswrapper[4858]: I0202 17:16:10.072828 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.072844 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.072853 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:10Z","lastTransitionTime":"2026-02-02T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.075294 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.086688 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.097505 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.174967 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.175047 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.175064 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.175083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.175095 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:10Z","lastTransitionTime":"2026-02-02T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.277906 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.277951 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.277965 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.277997 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.278010 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:10Z","lastTransitionTime":"2026-02-02T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.380692 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.380746 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.380762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.380785 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.380803 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:10Z","lastTransitionTime":"2026-02-02T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.391036 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 01:15:24.667450869 +0000 UTC Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.400411 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.400461 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:10 crc kubenswrapper[4858]: E0202 17:16:10.400557 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.400600 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.400597 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:10 crc kubenswrapper[4858]: E0202 17:16:10.400783 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:10 crc kubenswrapper[4858]: E0202 17:16:10.401016 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:10 crc kubenswrapper[4858]: E0202 17:16:10.401131 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.414131 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:1
5:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.425050 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.438087 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.459113 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616f491a213328c1d78ab637e5e24b5438294b9a
4d5b1db37e95f8b26cc2288d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:47Z\\\",\\\"message\\\":\\\"2 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:47.451911 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 17:15:47.451888 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:47.452188 6522 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 17:15:47.452211 6522 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 17:15:47.452238 6522 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 17:15:47.452259 6522 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:47.452274 6522 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 17:15:47.453331 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:47.453365 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:47.453413 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 17:15:47.453474 6522 factory.go:656] Stopping watch factory\\\\nI0202 17:15:47.453506 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 17:15:47.453520 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:47.453532 6522 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wkm4w_openshift-ovn-kubernetes(ce405d19-c944-4a11-8195-bca9289b8d73)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.483274 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.483505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.483614 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.483699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.483782 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:10Z","lastTransitionTime":"2026-02-02T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.487228 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.505754 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.518297 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.530245 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.543961 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.558351 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:16:09Z\\\",\\\"message\\\":\\\"2026-02-02T17:15:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b3dbb378-edad-457f-9c64-6bd6330f05ec\\\\n2026-02-02T17:15:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b3dbb378-edad-457f-9c64-6bd6330f05ec to /host/opt/cni/bin/\\\\n2026-02-02T17:15:24Z [verbose] multus-daemon started\\\\n2026-02-02T17:15:24Z [verbose] Readiness Indicator file check\\\\n2026-02-02T17:16:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.570462 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.580755 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c423d6b-08b2-46d8-886e-3e7daea27bfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d719263f90620c27bbf86fa46cc1140fd71aa3376d2c569ad8d07ff3e806ec1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9312c3670640a2f0b63e269409ac820988ce1bac655c3a63f49c02fa88afd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7c56ee2219f48c1296c4cb2e3cc2d390921a11d0faaf18ebcf1365dff431c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.589322 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.589360 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.589374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.589390 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.589401 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:10Z","lastTransitionTime":"2026-02-02T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.592324 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.604664 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.615716 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd7f6cd899675ca4ccd2da634b9c4865505db0ac1ca0f2ee8e2c65516fd2c190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c9a3c7ca03f3b32cb484e20e016538920e05487b8c52cbe92c86e62ce30fe85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.625068 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8jfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.639320 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.654557 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.691537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.691708 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:10 crc 
kubenswrapper[4858]: I0202 17:16:10.691776 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.691836 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.691893 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:10Z","lastTransitionTime":"2026-02-02T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.794691 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.794883 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.795113 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.795321 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.795502 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:10Z","lastTransitionTime":"2026-02-02T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.854070 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9szlc_4bc7963e-1bdc-4038-805e-bd72fc217a13/kube-multus/0.log" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.854145 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9szlc" event={"ID":"4bc7963e-1bdc-4038-805e-bd72fc217a13","Type":"ContainerStarted","Data":"485b3125d8343c8afd6e5d3b756e0a924c75da1d89c5e699ff825a8b46957bb7"} Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.870960 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\
\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.884618 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.896377 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.899249 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.899289 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.899299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.899313 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.899323 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:10Z","lastTransitionTime":"2026-02-02T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.909323 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.919480 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.929138 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485b3125d8343c8afd6e5d3b756e0a924c75da1d89c5e699ff825a8b46957bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:16:09Z\\\",\\\"message\\\":\\\"2026-02-02T17:15:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b3dbb378-edad-457f-9c64-6bd6330f05ec\\\\n2026-02-02T17:15:23+00:00 [cnibincopy] 
Successfully moved files in /host/opt/cni/bin/upgrade_b3dbb378-edad-457f-9c64-6bd6330f05ec to /host/opt/cni/bin/\\\\n2026-02-02T17:15:24Z [verbose] multus-daemon started\\\\n2026-02-02T17:15:24Z [verbose] Readiness Indicator file check\\\\n2026-02-02T17:16:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:16:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.942452 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.959767 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:47Z\\\",\\\"message\\\":\\\"2 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:47.451911 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 17:15:47.451888 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:47.452188 6522 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 17:15:47.452211 6522 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 17:15:47.452238 6522 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 17:15:47.452259 6522 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:47.452274 6522 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 17:15:47.453331 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:47.453365 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:47.453413 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 17:15:47.453474 6522 factory.go:656] Stopping watch factory\\\\nI0202 17:15:47.453506 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 17:15:47.453520 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:47.453532 6522 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wkm4w_openshift-ovn-kubernetes(ce405d19-c944-4a11-8195-bca9289b8d73)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.976718 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f5
61118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.988285 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:10 crc kubenswrapper[4858]: I0202 17:16:10.998164 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:10Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.003842 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.003886 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.003898 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.003917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.003930 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:11Z","lastTransitionTime":"2026-02-02T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.006719 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:11Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.014319 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:11Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.024100 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c423d6b-08b2-46d8-886e-3e7daea27bfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d719263f90620c27bbf86fa46cc1140fd71aa3376d2c569ad8d07ff3e806ec1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9312c3670640a2f0b63e269409ac820988ce1bac655c3a63f49c02fa88afd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7c56ee2219f48c1296c4cb2e3cc2d390921a11d0faaf18ebcf1365dff431c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:11Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.034707 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:11Z is after 
2025-08-24T17:21:41Z" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.045251 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:11Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.054871 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd7f6cd899675ca4ccd2da634b9c4865505db0ac1ca0f2ee8e2c65516fd2c190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c9a3c7ca03f3b32cb484e20e016538920e05487b8c52cbe92c86e62ce30fe85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:11Z is after 2025-08-24T17:21:41Z" Feb 02 
17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.064383 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8jfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:11Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.106165 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.106204 4858 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.106213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.106230 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.106240 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:11Z","lastTransitionTime":"2026-02-02T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.208632 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.208666 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.208678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.208692 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.208700 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:11Z","lastTransitionTime":"2026-02-02T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.310423 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.310473 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.310484 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.310499 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.310510 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:11Z","lastTransitionTime":"2026-02-02T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.392173 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:55:51.78184183 +0000 UTC Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.412714 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.412749 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.412759 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.412772 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.412785 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:11Z","lastTransitionTime":"2026-02-02T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.516219 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.516261 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.516270 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.516284 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.516295 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:11Z","lastTransitionTime":"2026-02-02T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.618916 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.619286 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.619298 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.619316 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.619328 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:11Z","lastTransitionTime":"2026-02-02T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.721614 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.721664 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.721680 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.721706 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.721722 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:11Z","lastTransitionTime":"2026-02-02T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.824257 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.824604 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.824739 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.824857 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.825015 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:11Z","lastTransitionTime":"2026-02-02T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.927894 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.928238 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.928378 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.928516 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:11 crc kubenswrapper[4858]: I0202 17:16:11.928660 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:11Z","lastTransitionTime":"2026-02-02T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.031251 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.031584 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.031735 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.031887 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.032046 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:12Z","lastTransitionTime":"2026-02-02T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.134206 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.134477 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.134598 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.134694 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.134777 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:12Z","lastTransitionTime":"2026-02-02T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.237411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.237793 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.238017 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.238081 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.238098 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:12Z","lastTransitionTime":"2026-02-02T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.340807 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.340843 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.340852 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.340867 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.340876 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:12Z","lastTransitionTime":"2026-02-02T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.392767 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 07:30:01.756789668 +0000 UTC Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.400196 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:12 crc kubenswrapper[4858]: E0202 17:16:12.400357 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.400394 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.400472 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:12 crc kubenswrapper[4858]: E0202 17:16:12.400594 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.400656 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:12 crc kubenswrapper[4858]: E0202 17:16:12.400683 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:12 crc kubenswrapper[4858]: E0202 17:16:12.400833 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.442997 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.443057 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.443075 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.443097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.443112 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:12Z","lastTransitionTime":"2026-02-02T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.546520 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.546597 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.546619 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.546654 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.546678 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:12Z","lastTransitionTime":"2026-02-02T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.649507 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.649547 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.649557 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.649574 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.649585 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:12Z","lastTransitionTime":"2026-02-02T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.752632 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.752698 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.752710 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.752735 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.752746 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:12Z","lastTransitionTime":"2026-02-02T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.855476 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.855516 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.855529 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.855550 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.855561 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:12Z","lastTransitionTime":"2026-02-02T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.958157 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.958202 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.958215 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.958234 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:12 crc kubenswrapper[4858]: I0202 17:16:12.958246 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:12Z","lastTransitionTime":"2026-02-02T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.061319 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.061388 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.061408 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.061434 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.061453 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:13Z","lastTransitionTime":"2026-02-02T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.164663 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.164722 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.164739 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.164762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.164779 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:13Z","lastTransitionTime":"2026-02-02T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.267733 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.267792 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.267809 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.267831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.267846 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:13Z","lastTransitionTime":"2026-02-02T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.371740 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.371801 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.371818 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.371841 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.371859 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:13Z","lastTransitionTime":"2026-02-02T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.393501 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 10:39:12.165349783 +0000 UTC Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.474714 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.474771 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.474789 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.474814 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.474835 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:13Z","lastTransitionTime":"2026-02-02T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.579052 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.579091 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.579105 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.579121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.579132 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:13Z","lastTransitionTime":"2026-02-02T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.681848 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.681919 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.681941 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.681970 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.682043 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:13Z","lastTransitionTime":"2026-02-02T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.784453 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.784490 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.784501 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.784518 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.784530 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:13Z","lastTransitionTime":"2026-02-02T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.887279 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.887312 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.887321 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.887337 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.887346 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:13Z","lastTransitionTime":"2026-02-02T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.990347 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.990425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.990447 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.990474 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:13 crc kubenswrapper[4858]: I0202 17:16:13.990493 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:13Z","lastTransitionTime":"2026-02-02T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.093222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.093272 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.093287 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.093310 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.093330 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:14Z","lastTransitionTime":"2026-02-02T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.196339 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.196418 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.196435 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.196457 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.196474 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:14Z","lastTransitionTime":"2026-02-02T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.394611 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 05:53:53.57794703 +0000 UTC Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.409963 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:14 crc kubenswrapper[4858]: E0202 17:16:14.410117 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.410895 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.411072 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:14 crc kubenswrapper[4858]: E0202 17:16:14.411171 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:14 crc kubenswrapper[4858]: E0202 17:16:14.411408 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.410850 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:14 crc kubenswrapper[4858]: E0202 17:16:14.411658 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.412140 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.412189 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.412208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.412231 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.412253 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:14Z","lastTransitionTime":"2026-02-02T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.516740 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.517141 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.517355 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.517504 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.517644 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:14Z","lastTransitionTime":"2026-02-02T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.620166 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.620238 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.620257 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.620284 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.620328 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:14Z","lastTransitionTime":"2026-02-02T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.723348 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.723417 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.723437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.723461 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.723481 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:14Z","lastTransitionTime":"2026-02-02T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.826586 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.826843 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.826925 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.827024 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.827140 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:14Z","lastTransitionTime":"2026-02-02T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.930009 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.930294 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.930444 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.930534 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:14 crc kubenswrapper[4858]: I0202 17:16:14.930610 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:14Z","lastTransitionTime":"2026-02-02T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.034150 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.034220 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.034245 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.034277 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.034296 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:15Z","lastTransitionTime":"2026-02-02T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.138621 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.138666 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.138679 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.138698 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.138712 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:15Z","lastTransitionTime":"2026-02-02T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.241161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.241231 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.241261 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.241286 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.241306 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:15Z","lastTransitionTime":"2026-02-02T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.344196 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.344247 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.344485 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.344511 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.344528 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:15Z","lastTransitionTime":"2026-02-02T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.395257 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 03:26:15.612559557 +0000 UTC Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.400954 4858 scope.go:117] "RemoveContainer" containerID="616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.448039 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.448091 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.448109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.448132 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.448150 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:15Z","lastTransitionTime":"2026-02-02T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.550369 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.550400 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.550410 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.550423 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.550433 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:15Z","lastTransitionTime":"2026-02-02T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.654040 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.654085 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.654102 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.654124 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.654141 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:15Z","lastTransitionTime":"2026-02-02T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.757353 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.757414 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.757438 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.757465 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.757485 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:15Z","lastTransitionTime":"2026-02-02T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.861846 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.861889 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.861901 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.861933 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.861947 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:15Z","lastTransitionTime":"2026-02-02T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.876627 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wkm4w_ce405d19-c944-4a11-8195-bca9289b8d73/ovnkube-controller/2.log" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.879137 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerStarted","Data":"b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6"} Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.880289 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.896100 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:15Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.912619 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:15Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.928908 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:15Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.942290 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:15Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.957250 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:15Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.967007 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.967045 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.967056 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.967072 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.967085 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:15Z","lastTransitionTime":"2026-02-02T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:15 crc kubenswrapper[4858]: I0202 17:16:15.974427 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:15Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.001259 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:15Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.026081 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485b3125d8343c8afd6e5d3b756e0a924c75da1d89c5e699ff825a8b46957bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:16:09Z\\\",\\\"message\\\":\\\"2026-02-02T17:15:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b3dbb378-edad-457f-9c64-6bd6330f05ec\\\\n2026-02-02T17:15:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b3dbb378-edad-457f-9c64-6bd6330f05ec to /host/opt/cni/bin/\\\\n2026-02-02T17:15:24Z [verbose] multus-daemon started\\\\n2026-02-02T17:15:24Z [verbose] Readiness Indicator file check\\\\n2026-02-02T17:16:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:16:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:16Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.043651 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:16Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.069508 4858 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.069538 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.069546 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.069559 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.069569 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:16Z","lastTransitionTime":"2026-02-02T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.079314 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b204539f583c37bb565b3768340a994c075f885b
429c3929af07a14e7c7356f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:47Z\\\",\\\"message\\\":\\\"2 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:47.451911 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 17:15:47.451888 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:47.452188 6522 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 17:15:47.452211 6522 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 17:15:47.452238 6522 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 17:15:47.452259 6522 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:47.452274 6522 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 17:15:47.453331 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:47.453365 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:47.453413 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 17:15:47.453474 6522 factory.go:656] Stopping watch factory\\\\nI0202 17:15:47.453506 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 17:15:47.453520 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:47.453532 6522 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:16Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.117148 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827
ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:16Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.135315 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:16Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.157000 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:16Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.172537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.172581 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.172595 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.172613 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.172627 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:16Z","lastTransitionTime":"2026-02-02T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.174901 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd7f6cd899675ca4ccd2da634b9c4865505db0ac1ca0f2ee8e2c65516fd2c190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c9a3c7ca03f3b32cb484e20e016538920e05487b8c52cbe92c86e62ce30fe85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:16Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.187901 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8jfm\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:16Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.205376 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c423d6b-08b2-46d8-886e-3e7daea27bfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d719263f90620c27bbf86fa46cc1140fd71aa3376d2c569ad8d07ff3e806ec1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9312c3670640a2f0b63e269409ac820988ce1bac655c3a63f49c02fa88afd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7c56ee2219f48c1296c4cb2e3cc2d390921a11d0faaf18ebcf1365dff431c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:16Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.226487 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:16Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.244461 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:16Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.275064 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.275104 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:16 crc 
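
The burst of "Failed to update status for pod" entries above shares a single root cause, stated at the tail of each message: the serving certificate behind the pod.network-node-identity.openshift.io webhook expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-02. A quick way to confirm what the endpoint at 127.0.0.1:9743 (the address logged in every failure) is actually presenting — a minimal sketch, assuming shell access to the node; openssl will report a verification failure but still prints the certificate:

    # Fetch the webhook's serving certificate and show its validity window.
    echo | openssl s_client -connect 127.0.0.1:9743 2>/dev/null \
        | openssl x509 -noout -dates

If notAfter predates the current date, every status patch will keep failing exactly as logged until the certificate is rotated or the clock is corrected.
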
kubenswrapper[4858]: I0202 17:16:16.275113 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.275129 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.275139 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:16Z","lastTransitionTime":"2026-02-02T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.377870 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.377917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.377930 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.377946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.377958 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:16Z","lastTransitionTime":"2026-02-02T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.396414 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 08:01:34.068357217 +0000 UTC Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.399776 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.399823 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.399786 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:16 crc kubenswrapper[4858]: E0202 17:16:16.399932 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:16 crc kubenswrapper[4858]: E0202 17:16:16.400037 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:16 crc kubenswrapper[4858]: E0202 17:16:16.400102 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.400154 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:16 crc kubenswrapper[4858]: E0202 17:16:16.400196 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.481700 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.481737 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.481746 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.481762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.481772 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:16Z","lastTransitionTime":"2026-02-02T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.584365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.584438 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.584454 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.584491 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.584509 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:16Z","lastTransitionTime":"2026-02-02T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.688093 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.688156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.688174 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.688198 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.688215 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:16Z","lastTransitionTime":"2026-02-02T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.791583 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.791649 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.791668 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.791696 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.791723 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:16Z","lastTransitionTime":"2026-02-02T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.885762 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wkm4w_ce405d19-c944-4a11-8195-bca9289b8d73/ovnkube-controller/3.log" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.886747 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wkm4w_ce405d19-c944-4a11-8195-bca9289b8d73/ovnkube-controller/2.log" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.890802 4858 generic.go:334] "Generic (PLEG): container finished" podID="ce405d19-c944-4a11-8195-bca9289b8d73" containerID="b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6" exitCode=1 Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.890899 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerDied","Data":"b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6"} Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.891232 4858 scope.go:117] "RemoveContainer" containerID="616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.891774 4858 scope.go:117] "RemoveContainer" containerID="b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6" Feb 02 17:16:16 crc kubenswrapper[4858]: E0202 17:16:16.892031 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wkm4w_openshift-ovn-kubernetes(ce405d19-c944-4a11-8195-bca9289b8d73)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.894367 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.894439 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.894465 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.894492 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.894517 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:16Z","lastTransitionTime":"2026-02-02T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.915222 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c423d6b-08b2-46d8-886e-3e7daea27bfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d719263f90620c27bbf86fa46cc1140fd71aa3376d2c569ad8d07ff3e806ec1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9312c3670640a2f0b63e269409ac820988ce1bac655c3a63f49c02fa88afd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7c56ee2219f48c1296c4cb2e3cc2d390921a11d0faaf18ebcf1365dff431c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:16Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.938815 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:16Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.957846 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:16Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.973738 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd7f6cd899675ca4ccd2da634b9c4865505db0ac1ca0f2ee8e2c65516fd2c190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c9a3c7ca03f3b32cb484e20e016538920e05487b8c52cbe92c86e62ce30fe85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:16Z is after 2025-08-24T17:21:41Z" Feb 02 
17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.988349 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8jfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:16Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.997214 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.997255 4858 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.997267 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.997307 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:16 crc kubenswrapper[4858]: I0202 17:16:16.997322 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:16Z","lastTransitionTime":"2026-02-02T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.011619 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.036017 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.052553 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.072099 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.089317 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.100007 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.100057 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.100072 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.100091 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.100144 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:17Z","lastTransitionTime":"2026-02-02T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.113248 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.129336 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.142213 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.153275 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.164535 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.182484 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485b3125d8343c8afd6e5d3b756e0a924c75da1d89c5e699ff825a8b46957bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:16:09Z\\\",\\\"message\\\":\\\"2026-02-02T17:15:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b3dbb378-edad-457f-9c64-6bd6330f05ec\\\\n2026-02-02T17:15:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b3dbb378-edad-457f-9c64-6bd6330f05ec to /host/opt/cni/bin/\\\\n2026-02-02T17:15:24Z [verbose] multus-daemon started\\\\n2026-02-02T17:15:24Z [verbose] Readiness Indicator file check\\\\n2026-02-02T17:16:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:16:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.193176 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.201760 4858 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.201793 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.201802 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.201818 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.201828 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:17Z","lastTransitionTime":"2026-02-02T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.211059 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b204539f583c37bb565b3768340a994c075f885b
429c3929af07a14e7c7356f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://616f491a213328c1d78ab637e5e24b5438294b9a4d5b1db37e95f8b26cc2288d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:15:47Z\\\",\\\"message\\\":\\\"2 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0202 17:15:47.451911 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 17:15:47.451888 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 17:15:47.452188 6522 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 17:15:47.452211 6522 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 17:15:47.452238 6522 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 17:15:47.452259 6522 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 17:15:47.452274 6522 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 17:15:47.453331 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 17:15:47.453365 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 17:15:47.453413 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 17:15:47.453474 6522 factory.go:656] Stopping watch factory\\\\nI0202 17:15:47.453506 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 17:15:47.453520 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 17:15:47.453532 6522 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:16:16Z\\\",\\\"message\\\":\\\"hift.io/serving-cert-secret-name:dns-default-metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{operator.openshift.io/v1 DNS default d8d88c7e-8c3e-49b6-8c5b-84aa454da2d7 0xc006ce8d37 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:dns,Protocol:UDP,Port:53,TargetPort:{1 0 dns},NodePort:0,AppProtocol:nil,},ServicePort{Name:dns-tcp,Protocol:TCP,Port:53,TargetPort:{1 0 dns-tcp},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.217.4.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0202 
17:16:16.584589 6918 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:16:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff3
2e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.305189 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.305283 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.305309 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.305339 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.305360 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:17Z","lastTransitionTime":"2026-02-02T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.363278 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.363330 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.363345 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.363365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.363382 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:17Z","lastTransitionTime":"2026-02-02T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:17 crc kubenswrapper[4858]: E0202 17:16:17.380035 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z"
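
"Error updating node status, will retry" is the kubelet's per-attempt message: upstream kubelet bounds these attempts with a small retry constant (nodeStatusUpdateRetry, 5 in current sources) before logging a terminal failure and waiting for the next sync period. A sketch of that loop shape, with tryUpdateNodeStatus standing in for the real node PATCH call (names and messages are illustrative except where they quote the log):

    package main

    import (
        "errors"
        "fmt"
    )

    // Upstream kubelet bounds status-update attempts with a constant like this.
    const nodeStatusUpdateRetry = 5

    // tryUpdateNodeStatus is a stand-in for the real node PATCH, which here is
    // rejected by the admission webhook on every attempt.
    func tryUpdateNodeStatus() error {
        return errors.New("failed calling webhook: x509: certificate has expired or is not yet valid")
    }

    func main() {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            if err := tryUpdateNodeStatus(); err == nil {
                return
            }
            fmt.Println("Error updating node status, will retry")
        }
        fmt.Println("Unable to update node status: update node status exceeds retry count")
    }

Because the webhook rejects every attempt identically, the kubelet makes up to five such attempts per update cycle here, each recording the same events and conditions before it gives up until the next cycle.
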
event="NodeHasNoDiskPressure" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.384446 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.384466 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.384483 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:17Z","lastTransitionTime":"2026-02-02T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.396560 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 02:49:11.972884841 +0000 UTC Feb 02 17:16:17 crc kubenswrapper[4858]: E0202 17:16:17.398950 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.409376 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.409429 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.409443 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.409462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.409476 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:17Z","lastTransitionTime":"2026-02-02T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:17 crc kubenswrapper[4858]: E0202 17:16:17.425650 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.429839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.429907 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.429925 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.429948 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.429966 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:17Z","lastTransitionTime":"2026-02-02T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:17 crc kubenswrapper[4858]: E0202 17:16:17.448202 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.452531 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.452590 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
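
The Ready=False condition above comes from the runtime network check: the container runtime reports NetworkReady=false until a CNI network config appears in the configured conf dir (here /etc/kubernetes/cni/net.d/), and on an ovn-kubernetes cluster that file is written by ovnkube-node, the very pod failing at the top of this excerpt. A Go sketch of that readiness gate (the directory comes from the log; the glob patterns are a simplification of libcni's real loading rules, which also parse and validate each file):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory from the log; presence of any recognized config file is
        // what flips the runtime's NetworkReady condition.
        dir := "/etc/kubernetes/cni/net.d"
        var configs []string
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, _ := filepath.Glob(filepath.Join(dir, pat))
            configs = append(configs, matches...)
        }
        if len(configs) == 0 {
            fmt.Println("NetworkReady=false: no CNI configuration file in", dir)
            os.Exit(1) // the state this node is stuck in
        }
        fmt.Println("NetworkReady=true, using:", configs[0])
    }

So the failures chain together: API writes are rejected while the network-node-identity webhook certificate is expired, ovnkube-node cannot come up and write its CNI config, and the kubelet keeps reporting the node NotReady.
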
event="NodeHasNoDiskPressure" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.452606 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.452629 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.452647 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:17Z","lastTransitionTime":"2026-02-02T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:17 crc kubenswrapper[4858]: E0202 17:16:17.469797 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: E0202 17:16:17.470088 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.472238 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.472282 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.472299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.472319 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.472336 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:17Z","lastTransitionTime":"2026-02-02T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.575115 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.575710 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.576131 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.577156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.577409 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:17Z","lastTransitionTime":"2026-02-02T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.681428 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.681802 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.682072 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.682315 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.682507 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:17Z","lastTransitionTime":"2026-02-02T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.785922 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.786212 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.786368 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.786492 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.786605 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:17Z","lastTransitionTime":"2026-02-02T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.890127 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.890221 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.890239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.890264 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.890281 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:17Z","lastTransitionTime":"2026-02-02T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.896791 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wkm4w_ce405d19-c944-4a11-8195-bca9289b8d73/ovnkube-controller/3.log" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.901485 4858 scope.go:117] "RemoveContainer" containerID="b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6" Feb 02 17:16:17 crc kubenswrapper[4858]: E0202 17:16:17.902369 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wkm4w_openshift-ovn-kubernetes(ce405d19-c944-4a11-8195-bca9289b8d73)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.923501 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973
b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.939838 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9
dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.959172 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.977124 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.990904 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:17Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.993192 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.993255 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.993278 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.993309 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:17 crc kubenswrapper[4858]: I0202 17:16:17.993331 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:17Z","lastTransitionTime":"2026-02-02T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.007619 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:18Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.021101 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:18Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.035199 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:18Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.054239 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485b3125d8343c8afd6e5d3b756e0a924c75da1d89c5e699ff825a8b46957bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:16:09Z\\\",\\\"message\\\":\\\"2026-02-02T17:15:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b3dbb378-edad-457f-9c64-6bd6330f05ec\\\\n2026-02-02T17:15:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b3dbb378-edad-457f-9c64-6bd6330f05ec to /host/opt/cni/bin/\\\\n2026-02-02T17:15:24Z [verbose] multus-daemon started\\\\n2026-02-02T17:15:24Z [verbose] Readiness Indicator file check\\\\n2026-02-02T17:16:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:16:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:18Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.071952 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:18Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.096448 4858 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.096504 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.096515 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.096532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.096543 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:18Z","lastTransitionTime":"2026-02-02T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.103581 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b204539f583c37bb565b3768340a994c075f885b
429c3929af07a14e7c7356f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:16:16Z\\\",\\\"message\\\":\\\"hift.io/serving-cert-secret-name:dns-default-metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{operator.openshift.io/v1 DNS default d8d88c7e-8c3e-49b6-8c5b-84aa454da2d7 0xc006ce8d37 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:dns,Protocol:UDP,Port:53,TargetPort:{1 0 dns},NodePort:0,AppProtocol:nil,},ServicePort{Name:dns-tcp,Protocol:TCP,Port:53,TargetPort:{1 0 dns-tcp},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.217.4.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0202 17:16:16.584589 6918 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:16:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wkm4w_openshift-ovn-kubernetes(ce405d19-c944-4a11-8195-bca9289b8d73)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:18Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.128582 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f5
61118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:18Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.149210 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:18Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.166155 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:18Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.178863 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd7f6cd899675ca4ccd2da634b9c4865505db0ac1ca0f2ee8e2c65516fd2c190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c9a3c7ca03f3b32cb484e20e016538920e05487b8c52cbe92c86e62ce30fe85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:18Z is after 2025-08-24T17:21:41Z" Feb 02 
17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.190591 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8jfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:18Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.199553 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.199640 4858 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.199656 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.199677 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.199697 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:18Z","lastTransitionTime":"2026-02-02T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.203054 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c423d6b-08b2-46d8-886e-3e7daea27bfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d719263f90620c27bbf86fa46cc1140fd71aa3376d2c569ad8d07ff3e806ec1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9312c3670640a2f0b63e269409ac820988ce1bac655c3a63f49c02fa88afd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7c56ee2219f48c1296c4cb2e3cc2d390921a11d0faaf18ebcf1365dff431c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:18Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.217509 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:18Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.301859 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.301892 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.301900 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.301913 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.301922 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:18Z","lastTransitionTime":"2026-02-02T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.396881 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 04:05:02.795556413 +0000 UTC Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.400170 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.400272 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:18 crc kubenswrapper[4858]: E0202 17:16:18.400372 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.400452 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.400607 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:18 crc kubenswrapper[4858]: E0202 17:16:18.400736 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:18 crc kubenswrapper[4858]: E0202 17:16:18.400621 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:18 crc kubenswrapper[4858]: E0202 17:16:18.401066 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.403643 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.403676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.403686 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.403700 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.403710 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:18Z","lastTransitionTime":"2026-02-02T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.506521 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.506571 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.506587 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.506611 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.506629 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:18Z","lastTransitionTime":"2026-02-02T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.609136 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.609191 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.609207 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.609230 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.609247 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:18Z","lastTransitionTime":"2026-02-02T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.711941 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.712004 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.712016 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.712034 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.712047 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:18Z","lastTransitionTime":"2026-02-02T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.814650 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.814699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.814715 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.814740 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.814757 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:18Z","lastTransitionTime":"2026-02-02T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.917290 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.917332 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.917344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.917361 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:18 crc kubenswrapper[4858]: I0202 17:16:18.917373 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:18Z","lastTransitionTime":"2026-02-02T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.020546 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.020614 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.020639 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.020667 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.020690 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:19Z","lastTransitionTime":"2026-02-02T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.124453 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.124546 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.124568 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.124594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.124616 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:19Z","lastTransitionTime":"2026-02-02T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.227549 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.227618 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.227641 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.227669 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.227687 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:19Z","lastTransitionTime":"2026-02-02T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.331182 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.331594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.331741 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.331892 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.332059 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:19Z","lastTransitionTime":"2026-02-02T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.397024 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 03:48:59.039944155 +0000 UTC Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.434630 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.434687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.434705 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.434729 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.434746 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:19Z","lastTransitionTime":"2026-02-02T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.537678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.537711 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.537719 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.537734 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.537743 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:19Z","lastTransitionTime":"2026-02-02T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.640426 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.640746 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.640815 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.640886 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.640953 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:19Z","lastTransitionTime":"2026-02-02T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.743760 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.743822 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.743839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.743865 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.743881 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:19Z","lastTransitionTime":"2026-02-02T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.847370 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.847443 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.847456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.847472 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.847509 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:19Z","lastTransitionTime":"2026-02-02T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.951039 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.951323 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.951429 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.951545 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:19 crc kubenswrapper[4858]: I0202 17:16:19.951655 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:19Z","lastTransitionTime":"2026-02-02T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.054441 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.054471 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.054480 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.054492 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.054500 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:20Z","lastTransitionTime":"2026-02-02T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.156582 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.156614 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.156623 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.156637 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.156648 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:20Z","lastTransitionTime":"2026-02-02T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.258485 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.258780 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.258845 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.258915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.259008 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:20Z","lastTransitionTime":"2026-02-02T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.361213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.361586 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.361946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.362212 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.362398 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:20Z","lastTransitionTime":"2026-02-02T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.397216 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 01:41:42.700598566 +0000 UTC Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.400793 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.400800 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:20 crc kubenswrapper[4858]: E0202 17:16:20.401035 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.401226 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.401409 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:20 crc kubenswrapper[4858]: E0202 17:16:20.401249 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:20 crc kubenswrapper[4858]: E0202 17:16:20.401625 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:20 crc kubenswrapper[4858]: E0202 17:16:20.401603 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.425477 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:20Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.444737 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:20Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.460166 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:20Z is after 2025-08-24T17:21:41Z"
Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.464953 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.465054 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.465081 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.465111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.465134 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:20Z","lastTransitionTime":"2026-02-02T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.478756 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:20Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.499951 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:20Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.514478 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:20Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.529504 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485b3125d8343c8afd6e5d3b756e0a924c75da1d89c5e699ff825a8b46957bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:16:09Z\\\",\\\"message\\\":\\\"2026-02-02T17:15:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b3dbb378-edad-457f-9c64-6bd6330f05ec\\\\n2026-02-02T17:15:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b3dbb378-edad-457f-9c64-6bd6330f05ec to /host/opt/cni/bin/\\\\n2026-02-02T17:15:24Z [verbose] multus-daemon started\\\\n2026-02-02T17:15:24Z [verbose] Readiness Indicator file check\\\\n2026-02-02T17:16:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:16:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:20Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.544057 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status 
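
The kube-multus termination message above records a poll that waited for the readiness-indicator file /host/run/multus/cni/net.d/10-ovn-kubernetes.conf and gave up ("pollimmediate error: timed out waiting for the condition", exit code 1). Below is a generic stand-in for that kind of file-wait loop, not multus's actual implementation; the path is taken from the log, and the ~45s timeout is only inferred from the gap between the 17:15:24 start and the 17:16:09 error.

```go
// Sketch of the kind of poll the multus message above describes:
// wait for a readiness-indicator file to appear, or time out.
// Path from the log; timeout is an assumption inferred from the
// timestamps; the loop is a generic stand-in.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForFile(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // indicator file exists: default network is ready
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
		45*time.Second, time.Second)
	if err != nil {
		// Corresponds to the exitCode 1 / "timed out waiting for the
		// condition" termination recorded above.
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("default network ready")
}
```
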
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:20Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.568031 4858 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.568153 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.568197 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.568216 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.568228 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:20Z","lastTransitionTime":"2026-02-02T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.571100 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b204539f583c37bb565b3768340a994c075f885b
429c3929af07a14e7c7356f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:16:16Z\\\",\\\"message\\\":\\\"hift.io/serving-cert-secret-name:dns-default-metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{operator.openshift.io/v1 DNS default d8d88c7e-8c3e-49b6-8c5b-84aa454da2d7 0xc006ce8d37 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:dns,Protocol:UDP,Port:53,TargetPort:{1 0 dns},NodePort:0,AppProtocol:nil,},ServicePort{Name:dns-tcp,Protocol:TCP,Port:53,TargetPort:{1 0 dns-tcp},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.217.4.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0202 17:16:16.584589 6918 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:16:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wkm4w_openshift-ovn-kubernetes(ce405d19-c944-4a11-8195-bca9289b8d73)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:20Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.602395 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f5
61118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:20Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.615561 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
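
For the ovnkube-controller container earlier in this burst, restartCount is 3 and the waiting state reads "back-off 40s restarting failed container". That is consistent with kubelet's documented crash-loop backoff, which doubles a 10s base delay per restart and caps at 5m (10s, 20s, 40s, ...). A sketch of that arithmetic, assuming the default 10s base and 5m cap:

```go
// Sketch of the CrashLoopBackOff delay progression, assuming the
// kubelet defaults (10s initial delay, doubling per restart, 5m cap).
package main

import (
	"fmt"
	"time"
)

func backoff(restarts int) time.Duration {
	d := 10 * time.Second
	for i := 1; i < restarts; i++ {
		d *= 2
		if d > 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for r := 1; r <= 6; r++ {
		fmt.Printf("restartCount=%d -> back-off %s\n", r, backoff(r))
	}
	// restartCount=3 -> back-off 40s, matching the waiting message
	// for ovnkube-controller in the log above.
}
```
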
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:20Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.628208 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:20Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.641130 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:20Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.656128 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8jfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:20Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.671044 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.671103 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.671116 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.671138 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.671153 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:20Z","lastTransitionTime":"2026-02-02T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.674753 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c423d6b-08b2-46d8-886e-3e7daea27bfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d719263f90620c27bbf86fa46cc1140fd71aa3376d2c569ad8d07ff3e806ec1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9312c3670640a2f0b63e269409ac820988ce1bac655c3a63f49c02fa88afd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7c56ee2219f48c1296c4cb2e3cc2d390921a11d0faaf18ebcf1365dff431c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:20Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.692299 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:20Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.706896 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:20Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.717948 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd7f6cd899675ca4ccd2da634b9c4865505db0ac1ca0f2ee8e2c65516fd2c190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c9a3c7ca03f3b32cb484e20e016538920e05487b8c52cbe92c86e62ce30fe85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:20Z is after 2025-08-24T17:21:41Z" Feb 02 
Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.774365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.774413 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.774444 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.774463 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.774474 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:20Z","lastTransitionTime":"2026-02-02T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.877278 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.877337 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.877355 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.877382 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.877400 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:20Z","lastTransitionTime":"2026-02-02T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.981467 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.982132 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.982155 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.982184 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:20 crc kubenswrapper[4858]: I0202 17:16:20.982205 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:20Z","lastTransitionTime":"2026-02-02T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.085320 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.085362 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.085374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.085403 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.085414 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:21Z","lastTransitionTime":"2026-02-02T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.188698 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.188786 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.188815 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.188846 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.188872 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:21Z","lastTransitionTime":"2026-02-02T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.291116 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.291380 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.291414 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.291442 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.291460 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:21Z","lastTransitionTime":"2026-02-02T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.396000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.396066 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.396088 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.396114 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.396133 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:21Z","lastTransitionTime":"2026-02-02T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.398039 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:56:40.337669453 +0000 UTC
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.499342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.499409 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.499426 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.499451 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.499467 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:21Z","lastTransitionTime":"2026-02-02T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.602479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.602542 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.602565 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.602594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.602617 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:21Z","lastTransitionTime":"2026-02-02T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.714368 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.714429 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.714454 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.714486 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.714508 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:21Z","lastTransitionTime":"2026-02-02T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.817461 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.817526 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.817546 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.817571 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.817591 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:21Z","lastTransitionTime":"2026-02-02T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.922225 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.922307 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.922330 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.922357 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:21 crc kubenswrapper[4858]: I0202 17:16:21.922383 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:21Z","lastTransitionTime":"2026-02-02T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.025490 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.025539 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.025550 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.025569 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.025588 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:22Z","lastTransitionTime":"2026-02-02T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.128815 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.128878 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.128897 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.128924 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.128943 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:22Z","lastTransitionTime":"2026-02-02T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.232481 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.232552 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.232576 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.232609 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.232633 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:22Z","lastTransitionTime":"2026-02-02T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.335562 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.335611 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.335627 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.335648 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.335661 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:22Z","lastTransitionTime":"2026-02-02T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.398702 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 08:31:53.536704719 +0000 UTC
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.399903 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 17:16:22 crc kubenswrapper[4858]: E0202 17:16:22.400100 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.400391 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 17:16:22 crc kubenswrapper[4858]: E0202 17:16:22.400490 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.400696 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 17:16:22 crc kubenswrapper[4858]: E0202 17:16:22.400799 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.401047 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm"
Feb 02 17:16:22 crc kubenswrapper[4858]: E0202 17:16:22.401183 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.438359 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.438562 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.438758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.438926 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.439114 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:22Z","lastTransitionTime":"2026-02-02T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.541931 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.542031 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.542056 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.542084 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.542103 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:22Z","lastTransitionTime":"2026-02-02T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.645314 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.645369 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.645385 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.645408 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.645424 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:22Z","lastTransitionTime":"2026-02-02T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.748771 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.748826 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.748844 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.748868 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.748886 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:22Z","lastTransitionTime":"2026-02-02T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.852099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.852168 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.852202 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.852229 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.852249 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:22Z","lastTransitionTime":"2026-02-02T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.955317 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.955372 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.955437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.955470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:22 crc kubenswrapper[4858]: I0202 17:16:22.955489 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:22Z","lastTransitionTime":"2026-02-02T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.058670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.058727 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.058743 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.058766 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.058784 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:23Z","lastTransitionTime":"2026-02-02T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.161754 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.161826 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.161848 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.161878 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.161899 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:23Z","lastTransitionTime":"2026-02-02T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.264824 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.264894 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.264913 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.264940 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.264957 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:23Z","lastTransitionTime":"2026-02-02T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.368135 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.368198 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.368217 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.368243 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.368262 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:23Z","lastTransitionTime":"2026-02-02T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.400401 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:11:20.806435535 +0000 UTC
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.471272 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.471347 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.471371 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.471398 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.471417 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:23Z","lastTransitionTime":"2026-02-02T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.574900 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.574945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.574957 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.575026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.575054 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:23Z","lastTransitionTime":"2026-02-02T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.679120 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.679186 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.679205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.679228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.679245 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:23Z","lastTransitionTime":"2026-02-02T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.781629 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.781688 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.781697 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.781710 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.781720 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:23Z","lastTransitionTime":"2026-02-02T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.885144 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.885229 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.885260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.885290 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.885310 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:23Z","lastTransitionTime":"2026-02-02T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.988757 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.988799 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.988810 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.988826 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:23 crc kubenswrapper[4858]: I0202 17:16:23.988839 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:23Z","lastTransitionTime":"2026-02-02T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.091296 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.091341 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.091353 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.091370 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.091382 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:24Z","lastTransitionTime":"2026-02-02T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.194760 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.194828 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.194849 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.194878 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.194903 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:24Z","lastTransitionTime":"2026-02-02T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.297175 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.297246 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.297269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.297302 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.297328 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:24Z","lastTransitionTime":"2026-02-02T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.317692 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.317825 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.317884 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 17:16:24 crc kubenswrapper[4858]: E0202 17:16:24.318015 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:28.317936048 +0000 UTC m=+149.470351373 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:16:24 crc kubenswrapper[4858]: E0202 17:16:24.318025 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.318069 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 17:16:24 crc kubenswrapper[4858]: E0202 17:16:24.318095 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 17:16:24 crc kubenswrapper[4858]: E0202 17:16:24.318108 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 17:16:24 crc kubenswrapper[4858]: E0202 17:16:24.318129 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 17:16:24 crc kubenswrapper[4858]: E0202 17:16:24.318118 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 17:16:24 crc kubenswrapper[4858]: E0202 17:16:24.318177 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 17:17:28.318161004 +0000 UTC m=+149.470576359 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 17:16:24 crc kubenswrapper[4858]: E0202 17:16:24.318203 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 17:17:28.318188365 +0000 UTC m=+149.470603630 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 17:16:24 crc kubenswrapper[4858]: E0202 17:16:24.318220 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 17:17:28.318213066 +0000 UTC m=+149.470628461 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.400374 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.400477 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.400384 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.400594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.400620 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:24 crc kubenswrapper[4858]: E0202 17:16:24.400607 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.400630 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 02:59:04.959241595 +0000 UTC
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.400649 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.400740 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:24Z","lastTransitionTime":"2026-02-02T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:24 crc kubenswrapper[4858]: E0202 17:16:24.400762 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.400899 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.400595 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 17:16:24 crc kubenswrapper[4858]: E0202 17:16:24.401055 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122"
Feb 02 17:16:24 crc kubenswrapper[4858]: E0202 17:16:24.401148 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.418503 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 17:16:24 crc kubenswrapper[4858]: E0202 17:16:24.418609 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 17:16:24 crc kubenswrapper[4858]: E0202 17:16:24.418621 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 17:16:24 crc kubenswrapper[4858]: E0202 17:16:24.418631 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 17:16:24 crc kubenswrapper[4858]: E0202 17:16:24.418667 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 17:17:28.418653435 +0000 UTC m=+149.571068700 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.503750 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.503786 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.503797 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.503813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.503826 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:24Z","lastTransitionTime":"2026-02-02T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.607357 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.607411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.607445 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.607475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.607495 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:24Z","lastTransitionTime":"2026-02-02T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.711177 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.711250 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.711272 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.711300 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.711322 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:24Z","lastTransitionTime":"2026-02-02T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.817321 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.817388 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.817408 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.817437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.817455 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:24Z","lastTransitionTime":"2026-02-02T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.921479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.921529 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.921545 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.921567 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:24 crc kubenswrapper[4858]: I0202 17:16:24.921584 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:24Z","lastTransitionTime":"2026-02-02T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.024851 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.024915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.024932 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.024955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.025005 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:25Z","lastTransitionTime":"2026-02-02T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.127844 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.127935 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.127951 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.128005 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.128025 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:25Z","lastTransitionTime":"2026-02-02T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.230702 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.230758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.230776 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.230796 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.230810 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:25Z","lastTransitionTime":"2026-02-02T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.333816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.333913 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.333932 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.333959 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.334010 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:25Z","lastTransitionTime":"2026-02-02T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.401620 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 07:04:27.393889881 +0000 UTC Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.437171 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.437254 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.437285 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.437315 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.437343 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:25Z","lastTransitionTime":"2026-02-02T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.539958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.540046 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.540057 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.540073 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.540084 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:25Z","lastTransitionTime":"2026-02-02T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.642651 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.642711 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.642727 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.642749 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.642766 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:25Z","lastTransitionTime":"2026-02-02T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.745156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.745281 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.745291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.745304 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.745312 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:25Z","lastTransitionTime":"2026-02-02T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.847948 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.848045 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.848063 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.848085 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.848101 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:25Z","lastTransitionTime":"2026-02-02T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.950946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.951028 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.951045 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.951072 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:25 crc kubenswrapper[4858]: I0202 17:16:25.951092 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:25Z","lastTransitionTime":"2026-02-02T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.054121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.054166 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.054176 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.054191 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.054200 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:26Z","lastTransitionTime":"2026-02-02T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.157174 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.157227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.157244 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.157269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.157290 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:26Z","lastTransitionTime":"2026-02-02T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.259754 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.259826 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.259845 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.259869 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.259886 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:26Z","lastTransitionTime":"2026-02-02T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.363832 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.363960 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.364032 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.364064 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.364087 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:26Z","lastTransitionTime":"2026-02-02T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.400327 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.400433 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm"
Feb 02 17:16:26 crc kubenswrapper[4858]: E0202 17:16:26.400471 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.400559 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.400603 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 17:16:26 crc kubenswrapper[4858]: E0202 17:16:26.400733 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122"
Feb 02 17:16:26 crc kubenswrapper[4858]: E0202 17:16:26.400785 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 17:16:26 crc kubenswrapper[4858]: E0202 17:16:26.400942 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.402723 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 21:03:48.701937159 +0000 UTC
Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.467738 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.467807 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.467825 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.467891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.467910 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:26Z","lastTransitionTime":"2026-02-02T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.570134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.570184 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.570200 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.570222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.570237 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:26Z","lastTransitionTime":"2026-02-02T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.673301 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.673349 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.673367 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.673390 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.673407 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:26Z","lastTransitionTime":"2026-02-02T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.775809 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.775887 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.775904 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.775930 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.775947 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:26Z","lastTransitionTime":"2026-02-02T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.878622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.878709 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.878733 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.878761 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.878782 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:26Z","lastTransitionTime":"2026-02-02T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.981055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.981123 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.981139 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.981161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:26 crc kubenswrapper[4858]: I0202 17:16:26.981179 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:26Z","lastTransitionTime":"2026-02-02T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.084645 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.084717 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.084739 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.084772 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.084793 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:27Z","lastTransitionTime":"2026-02-02T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.188166 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.188242 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.188265 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.188292 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.188312 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:27Z","lastTransitionTime":"2026-02-02T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.290997 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.291048 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.291098 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.291173 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.291218 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:27Z","lastTransitionTime":"2026-02-02T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.393304 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.393356 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.393372 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.393394 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.393411 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:27Z","lastTransitionTime":"2026-02-02T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.403479 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:17:58.089950972 +0000 UTC Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.496724 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.496792 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.496809 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.496832 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.496849 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:27Z","lastTransitionTime":"2026-02-02T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.599841 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.599900 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.599915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.599936 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.599952 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:27Z","lastTransitionTime":"2026-02-02T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.703140 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.703191 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.703207 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.703232 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.703249 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:27Z","lastTransitionTime":"2026-02-02T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.707912 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.707969 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.708015 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.708039 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.708056 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:27Z","lastTransitionTime":"2026-02-02T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:27 crc kubenswrapper[4858]: E0202 17:16:27.729124 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:27Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.738958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.739043 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.739061 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.739088 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.739105 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:27Z","lastTransitionTime":"2026-02-02T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:27 crc kubenswrapper[4858]: E0202 17:16:27.783719 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:27Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.791278 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.791329 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.791344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.791368 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.791384 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:27Z","lastTransitionTime":"2026-02-02T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:27 crc kubenswrapper[4858]: E0202 17:16:27.810472 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:27Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.813928 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.813957 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.813967 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.814014 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.814023 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:27Z","lastTransitionTime":"2026-02-02T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:27 crc kubenswrapper[4858]: E0202 17:16:27.830470 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:27Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.833541 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.833577 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.833586 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.833600 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.833609 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:27Z","lastTransitionTime":"2026-02-02T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:27 crc kubenswrapper[4858]: E0202 17:16:27.846464 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:27Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:27 crc kubenswrapper[4858]: E0202 17:16:27.846616 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.847931 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.848032 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.848049 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.848067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.848079 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:27Z","lastTransitionTime":"2026-02-02T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.950771 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.950833 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.950853 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.950879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:27 crc kubenswrapper[4858]: I0202 17:16:27.950901 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:27Z","lastTransitionTime":"2026-02-02T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.054234 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.054276 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.054285 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.054303 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.054313 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:28Z","lastTransitionTime":"2026-02-02T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.157532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.157612 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.157637 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.157669 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.157690 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:28Z","lastTransitionTime":"2026-02-02T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.260164 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.260233 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.260255 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.260283 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.260303 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:28Z","lastTransitionTime":"2026-02-02T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.363731 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.363794 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.363815 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.363845 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.363961 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:28Z","lastTransitionTime":"2026-02-02T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.400276 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.400370 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.400375 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.400442 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:28 crc kubenswrapper[4858]: E0202 17:16:28.400590 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:28 crc kubenswrapper[4858]: E0202 17:16:28.400719 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:28 crc kubenswrapper[4858]: E0202 17:16:28.400854 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:28 crc kubenswrapper[4858]: E0202 17:16:28.400950 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.404284 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 14:27:07.402728675 +0000 UTC Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.466498 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.466570 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.466593 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.466625 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.466644 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:28Z","lastTransitionTime":"2026-02-02T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.570189 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.570257 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.570275 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.570298 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.570315 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:28Z","lastTransitionTime":"2026-02-02T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.673430 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.673492 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.673514 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.673558 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.673583 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:28Z","lastTransitionTime":"2026-02-02T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.776250 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.776359 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.776382 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.776406 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.776424 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:28Z","lastTransitionTime":"2026-02-02T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.879441 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.879504 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.879522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.879546 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.879563 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:28Z","lastTransitionTime":"2026-02-02T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.982673 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.982885 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.982915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.983038 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:28 crc kubenswrapper[4858]: I0202 17:16:28.983125 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:28Z","lastTransitionTime":"2026-02-02T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.086175 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.086239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.086260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.086293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.086316 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:29Z","lastTransitionTime":"2026-02-02T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.189549 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.189608 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.189627 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.189650 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.189666 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:29Z","lastTransitionTime":"2026-02-02T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.292830 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.292893 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.292909 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.292932 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.292949 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:29Z","lastTransitionTime":"2026-02-02T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.396370 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.396431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.396447 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.396470 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.396486 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:29Z","lastTransitionTime":"2026-02-02T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.404743 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:42:44.308117485 +0000 UTC Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.500083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.500143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.500160 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.500182 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.500199 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:29Z","lastTransitionTime":"2026-02-02T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.603074 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.603150 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.603172 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.603196 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.603215 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:29Z","lastTransitionTime":"2026-02-02T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.705681 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.705743 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.705761 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.705787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.705806 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:29Z","lastTransitionTime":"2026-02-02T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.807616 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.807648 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.807655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.807669 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.807677 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:29Z","lastTransitionTime":"2026-02-02T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.910270 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.910382 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.910406 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.910437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:29 crc kubenswrapper[4858]: I0202 17:16:29.910462 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:29Z","lastTransitionTime":"2026-02-02T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.013185 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.013234 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.013244 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.013259 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.013270 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:30Z","lastTransitionTime":"2026-02-02T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.116668 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.116756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.116781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.116812 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.116834 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:30Z","lastTransitionTime":"2026-02-02T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.220094 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.220164 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.220183 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.220207 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.220224 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:30Z","lastTransitionTime":"2026-02-02T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.323602 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.323655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.323674 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.323696 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.323715 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:30Z","lastTransitionTime":"2026-02-02T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.399541 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.399618 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.399554 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.400045 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:30 crc kubenswrapper[4858]: E0202 17:16:30.400204 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:30 crc kubenswrapper[4858]: E0202 17:16:30.399703 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:30 crc kubenswrapper[4858]: E0202 17:16:30.400353 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
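The "Failed to update status for pod" entries that follow share one root cause, stated at the end of each: the node-identity webhook at https://127.0.0.1:9743 presents a serving certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-02-02. A minimal stdlib-Go probe (not OpenShift tooling; the address is taken from the log) to read the presented certificate's validity window directly:

package main

import (
    "crypto/tls"
    "fmt"
    "os"
    "time"
)

func main() {
    // Webhook endpoint from the log entries below.
    addr := "127.0.0.1:9743"

    // Verification is skipped on purpose: the goal is to inspect the
    // expired certificate, not to trust it.
    conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
    if err != nil {
        fmt.Printf("dial %s: %v\n", addr, err)
        os.Exit(1)
    }
    defer conn.Close()

    leaf := conn.ConnectionState().PeerCertificates[0]
    fmt.Printf("subject:   %s\n", leaf.Subject)
    fmt.Printf("notBefore: %s\n", leaf.NotBefore.Format(time.RFC3339))
    fmt.Printf("notAfter:  %s\n", leaf.NotAfter.Format(time.RFC3339))

    if now := time.Now(); now.After(leaf.NotAfter) {
        // Matches the kubelet error text: "certificate has expired or is
        // not yet valid: current time ... is after ...".
        fmt.Println("certificate has EXPIRED relative to the local clock")
        os.Exit(1)
    }
    fmt.Println("certificate is within its validity window")
}

Whether the fix is rotating the webhook certificate or correcting a jumped clock cannot be decided from this log alone; on CRC this pattern typically appears when a VM image is started long after its embedded certificates were issued, and the cluster is expected to rotate them itself once the kube-apiserver becomes reachable.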
pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:30 crc kubenswrapper[4858]: E0202 17:16:30.400455 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.404848 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:54:02.529828834 +0000 UTC Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.414913 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.417423 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd7f6cd899675ca4ccd2da634b9c4865505db0ac1ca0f2ee8e2c65516fd2c190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c9a3c7ca03f3b32cb484e20e016538920e05487b8c52cbe92c86e62ce30fe85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.426945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.427009 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.427025 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.427047 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.427065 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:30Z","lastTransitionTime":"2026-02-02T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.431355 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8jfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.444647 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c423d6b-08b2-46d8-886e-3e7daea27bfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d719263f90620c27bbf86fa46cc1140fd71aa3376d2c569ad8d07ff3e806ec1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9312c3670640a2f0b63e269409ac820988ce1bac655c3a63f49c02fa88afd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7c56ee2219f48c1296c4cb2e3cc2d390921a11d0faaf18ebcf1365dff431c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.457062 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:30Z is after 
2025-08-24T17:21:41Z" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.467674 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.479809 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.499540 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.515771 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.529778 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.530137 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.530160 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.530320 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.530343 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.530353 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:30Z","lastTransitionTime":"2026-02-02T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.542928 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.553269 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.563441 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.579001 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485b3125d8343c8afd6e5d3b756e0a924c75da1d89c5e699ff825a8b46957bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:16:09Z\\\",\\\"message\\\":\\\"2026-02-02T17:15:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b3dbb378-edad-457f-9c64-6bd6330f05ec\\\\n2026-02-02T17:15:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b3dbb378-edad-457f-9c64-6bd6330f05ec to /host/opt/cni/bin/\\\\n2026-02-02T17:15:24Z [verbose] multus-daemon started\\\\n2026-02-02T17:15:24Z [verbose] Readiness Indicator file check\\\\n2026-02-02T17:16:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:16:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.590318 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.607501 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:16:16Z\\\",\\\"message\\\":\\\"hift.io/serving-cert-secret-name:dns-default-metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{operator.openshift.io/v1 DNS default d8d88c7e-8c3e-49b6-8c5b-84aa454da2d7 0xc006ce8d37 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:dns,Protocol:UDP,Port:53,TargetPort:{1 0 dns},NodePort:0,AppProtocol:nil,},ServicePort{Name:dns-tcp,Protocol:TCP,Port:53,TargetPort:{1 0 dns-tcp},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.217.4.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0202 17:16:16.584589 6918 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:16:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-wkm4w_openshift-ovn-kubernetes(ce405d19-c944-4a11-8195-bca9289b8d73)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.632964 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.633079 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.633105 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.633137 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.633164 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:30Z","lastTransitionTime":"2026-02-02T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.634310 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.649042 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.663427 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:30Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.735628 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.735684 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.735702 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.735726 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.735746 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:30Z","lastTransitionTime":"2026-02-02T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.837716 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.837781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.837797 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.837822 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.837840 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:30Z","lastTransitionTime":"2026-02-02T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.941235 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.941293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.941311 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.941336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:30 crc kubenswrapper[4858]: I0202 17:16:30.941354 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:30Z","lastTransitionTime":"2026-02-02T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.044409 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.044459 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.044476 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.044498 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.044514 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:31Z","lastTransitionTime":"2026-02-02T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.147724 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.147780 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.147799 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.147821 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.147839 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:31Z","lastTransitionTime":"2026-02-02T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.251756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.251847 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.251865 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.251890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.251906 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:31Z","lastTransitionTime":"2026-02-02T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.354321 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.354385 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.354406 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.354432 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.354480 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:31Z","lastTransitionTime":"2026-02-02T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.401377 4858 scope.go:117] "RemoveContainer" containerID="b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6" Feb 02 17:16:31 crc kubenswrapper[4858]: E0202 17:16:31.401739 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wkm4w_openshift-ovn-kubernetes(ce405d19-c944-4a11-8195-bca9289b8d73)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.405361 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 19:56:02.206894776 +0000 UTC Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.456334 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.456414 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.456440 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.456466 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.456483 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:31Z","lastTransitionTime":"2026-02-02T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.562594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.562839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.562850 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.562876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.562887 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:31Z","lastTransitionTime":"2026-02-02T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.666188 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.666229 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.666238 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.666252 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.666263 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:31Z","lastTransitionTime":"2026-02-02T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.769233 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.769276 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.769291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.769335 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.769351 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:31Z","lastTransitionTime":"2026-02-02T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.872083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.872140 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.872157 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.872183 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.872201 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:31Z","lastTransitionTime":"2026-02-02T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.974592 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.974636 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.974647 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.974662 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:31 crc kubenswrapper[4858]: I0202 17:16:31.974673 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:31Z","lastTransitionTime":"2026-02-02T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.077037 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.077112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.077130 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.077154 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.077172 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:32Z","lastTransitionTime":"2026-02-02T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.180603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.180670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.180694 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.180723 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.180744 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:32Z","lastTransitionTime":"2026-02-02T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.283612 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.283705 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.283728 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.283757 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.283775 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:32Z","lastTransitionTime":"2026-02-02T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.386520 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.386580 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.386597 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.386621 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.386637 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:32Z","lastTransitionTime":"2026-02-02T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.401197 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.401253 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:32 crc kubenswrapper[4858]: E0202 17:16:32.401398 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.401462 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.401467 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:32 crc kubenswrapper[4858]: E0202 17:16:32.401641 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:32 crc kubenswrapper[4858]: E0202 17:16:32.401888 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:32 crc kubenswrapper[4858]: E0202 17:16:32.402190 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.405575 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:10:44.21455488 +0000 UTC Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.490059 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.490110 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.490125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.490144 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.490157 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:32Z","lastTransitionTime":"2026-02-02T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.593094 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.593153 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.593182 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.593205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.593222 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:32Z","lastTransitionTime":"2026-02-02T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.696657 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.696710 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.696728 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.696752 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.696770 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:32Z","lastTransitionTime":"2026-02-02T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.800558 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.800633 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.800657 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.800687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.800709 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:32Z","lastTransitionTime":"2026-02-02T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.903361 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.903412 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.903428 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.903451 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:32 crc kubenswrapper[4858]: I0202 17:16:32.903469 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:32Z","lastTransitionTime":"2026-02-02T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.006058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.006118 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.006142 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.006173 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.006201 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:33Z","lastTransitionTime":"2026-02-02T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.108672 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.108710 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.108722 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.108737 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.108750 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:33Z","lastTransitionTime":"2026-02-02T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.211589 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.211646 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.211669 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.211700 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.211723 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:33Z","lastTransitionTime":"2026-02-02T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.314717 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.314791 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.314813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.314844 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.314869 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:33Z","lastTransitionTime":"2026-02-02T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.406509 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 10:16:32.516156035 +0000 UTC Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.418192 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.418240 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.418256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.418279 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.418296 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:33Z","lastTransitionTime":"2026-02-02T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.521426 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.521492 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.521511 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.521532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.521546 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:33Z","lastTransitionTime":"2026-02-02T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.623551 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.623767 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.623876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.623990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.624076 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:33Z","lastTransitionTime":"2026-02-02T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.726852 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.726909 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.726928 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.726952 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.727012 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:33Z","lastTransitionTime":"2026-02-02T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.830403 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.830483 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.830505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.830533 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.830554 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:33Z","lastTransitionTime":"2026-02-02T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.933631 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.933915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.934115 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.934267 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:33 crc kubenswrapper[4858]: I0202 17:16:33.934385 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:33Z","lastTransitionTime":"2026-02-02T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.038222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.038302 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.038320 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.038352 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.038376 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:34Z","lastTransitionTime":"2026-02-02T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.141157 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.141238 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.141262 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.141293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.141316 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:34Z","lastTransitionTime":"2026-02-02T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.244856 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.244910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.244927 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.244951 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.244968 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:34Z","lastTransitionTime":"2026-02-02T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.347414 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.347456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.347468 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.347488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.347501 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:34Z","lastTransitionTime":"2026-02-02T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.400452 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:34 crc kubenswrapper[4858]: E0202 17:16:34.400818 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.400528 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:34 crc kubenswrapper[4858]: E0202 17:16:34.401110 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.400476 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:34 crc kubenswrapper[4858]: E0202 17:16:34.401362 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.400554 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:34 crc kubenswrapper[4858]: E0202 17:16:34.401577 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.407469 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 00:16:33.600411419 +0000 UTC Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.450622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.450666 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.450678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.450695 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.450706 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:34Z","lastTransitionTime":"2026-02-02T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.554501 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.554565 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.554580 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.554610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.554640 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:34Z","lastTransitionTime":"2026-02-02T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.658560 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.658620 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.658637 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.658663 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.658683 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:34Z","lastTransitionTime":"2026-02-02T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.762087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.762478 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.762654 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.762801 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.763063 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:34Z","lastTransitionTime":"2026-02-02T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.866254 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.866316 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.866326 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.866348 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.866362 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:34Z","lastTransitionTime":"2026-02-02T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.968964 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.969052 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.969072 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.969094 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:34 crc kubenswrapper[4858]: I0202 17:16:34.969111 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:34Z","lastTransitionTime":"2026-02-02T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.071515 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.071579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.071595 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.071619 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.071638 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:35Z","lastTransitionTime":"2026-02-02T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.173672 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.173718 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.173730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.173748 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.173761 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:35Z","lastTransitionTime":"2026-02-02T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.277137 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.277175 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.277184 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.277199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.277207 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:35Z","lastTransitionTime":"2026-02-02T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.379302 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.379354 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.379372 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.379396 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.379412 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:35Z","lastTransitionTime":"2026-02-02T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.408151 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:25:01.314112607 +0000 UTC Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.482035 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.482087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.482114 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.482137 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.482155 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:35Z","lastTransitionTime":"2026-02-02T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.584749 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.584794 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.584816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.584842 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.584860 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:35Z","lastTransitionTime":"2026-02-02T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.688823 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.688882 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.688901 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.688923 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.688940 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:35Z","lastTransitionTime":"2026-02-02T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.792522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.792576 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.792595 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.792619 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.792639 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:35Z","lastTransitionTime":"2026-02-02T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.896174 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.896230 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.896246 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.896268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:35 crc kubenswrapper[4858]: I0202 17:16:35.896288 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:35Z","lastTransitionTime":"2026-02-02T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.000202 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.000267 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.000286 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.000314 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.000336 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:36Z","lastTransitionTime":"2026-02-02T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.104040 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.104095 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.104112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.104140 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.104159 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:36Z","lastTransitionTime":"2026-02-02T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.206589 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.206659 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.206676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.206706 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.206723 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:36Z","lastTransitionTime":"2026-02-02T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.310314 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.310371 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.310387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.310412 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.310429 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:36Z","lastTransitionTime":"2026-02-02T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.400360 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.400437 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.400488 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:36 crc kubenswrapper[4858]: E0202 17:16:36.400584 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.400665 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:36 crc kubenswrapper[4858]: E0202 17:16:36.400934 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:36 crc kubenswrapper[4858]: E0202 17:16:36.401156 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:36 crc kubenswrapper[4858]: E0202 17:16:36.401322 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.408836 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 01:49:13.452642092 +0000 UTC Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.412622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.412655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.412663 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.412678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.412688 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:36Z","lastTransitionTime":"2026-02-02T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.514955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.515009 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.515020 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.515035 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.515049 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:36Z","lastTransitionTime":"2026-02-02T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.617777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.617876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.617899 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.618030 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.618061 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:36Z","lastTransitionTime":"2026-02-02T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.720959 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.721055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.721073 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.721098 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.721116 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:36Z","lastTransitionTime":"2026-02-02T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.824794 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.824833 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.824842 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.824857 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.824868 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:36Z","lastTransitionTime":"2026-02-02T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.927522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.927626 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.927699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.927746 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:36 crc kubenswrapper[4858]: I0202 17:16:36.927812 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:36Z","lastTransitionTime":"2026-02-02T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.030481 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.030545 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.030563 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.030585 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.030602 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:37Z","lastTransitionTime":"2026-02-02T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.133169 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.133222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.133239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.133261 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.133280 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:37Z","lastTransitionTime":"2026-02-02T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.236570 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.236607 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.236615 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.236629 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.236655 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:37Z","lastTransitionTime":"2026-02-02T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.339228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.339298 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.339334 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.339353 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.339385 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:37Z","lastTransitionTime":"2026-02-02T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.409771 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 13:54:28.802097941 +0000 UTC Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.442478 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.442554 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.442565 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.442583 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.442597 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:37Z","lastTransitionTime":"2026-02-02T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.545301 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.545371 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.545381 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.545396 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.545406 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:37Z","lastTransitionTime":"2026-02-02T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.648123 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.648174 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.648186 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.648200 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.648211 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:37Z","lastTransitionTime":"2026-02-02T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.751367 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.751434 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.751456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.751486 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.751508 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:37Z","lastTransitionTime":"2026-02-02T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.854473 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.854603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.854635 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.854714 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.854741 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:37Z","lastTransitionTime":"2026-02-02T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.957412 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.957497 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.957523 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.957555 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.957577 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:37Z","lastTransitionTime":"2026-02-02T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.959463 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.959526 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.959544 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.959570 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.959590 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:37Z","lastTransitionTime":"2026-02-02T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:37 crc kubenswrapper[4858]: E0202 17:16:37.981644 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.988488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.988547 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.988573 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.988602 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:37 crc kubenswrapper[4858]: I0202 17:16:37.988624 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:37Z","lastTransitionTime":"2026-02-02T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:38 crc kubenswrapper[4858]: E0202 17:16:38.004891 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.010009 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.010058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.010075 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.010099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.010116 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:38Z","lastTransitionTime":"2026-02-02T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:38 crc kubenswrapper[4858]: E0202 17:16:38.037170 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.043941 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.044047 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.044073 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.044103 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.044126 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:38Z","lastTransitionTime":"2026-02-02T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:38 crc kubenswrapper[4858]: E0202 17:16:38.065642 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.070843 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.070891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.070908 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.070932 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.070951 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:38Z","lastTransitionTime":"2026-02-02T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:38 crc kubenswrapper[4858]: E0202 17:16:38.090898 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:38 crc kubenswrapper[4858]: E0202 17:16:38.091201 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.093515 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.093662 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.093766 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.093877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.094014 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:38Z","lastTransitionTime":"2026-02-02T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.196305 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.196344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.196357 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.196371 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.196379 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:38Z","lastTransitionTime":"2026-02-02T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.299155 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.299205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.299219 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.299239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.299256 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:38Z","lastTransitionTime":"2026-02-02T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.400432 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.400564 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:38 crc kubenswrapper[4858]: E0202 17:16:38.400646 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:38 crc kubenswrapper[4858]: E0202 17:16:38.400744 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.400855 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.401096 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:38 crc kubenswrapper[4858]: E0202 17:16:38.401220 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:38 crc kubenswrapper[4858]: E0202 17:16:38.400970 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.403781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.403827 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.403843 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.403866 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.403884 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:38Z","lastTransitionTime":"2026-02-02T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.410398 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 19:44:07.854352736 +0000 UTC Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.506277 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.506329 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.506346 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.506398 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.506417 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:38Z","lastTransitionTime":"2026-02-02T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.609620 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.609696 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.609721 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.609753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.609777 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:38Z","lastTransitionTime":"2026-02-02T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.674191 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs\") pod \"network-metrics-daemon-t8jfm\" (UID: \"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\") " pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:38 crc kubenswrapper[4858]: E0202 17:16:38.674500 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 17:16:38 crc kubenswrapper[4858]: E0202 17:16:38.674624 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs podName:8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122 nodeName:}" failed. No retries permitted until 2026-02-02 17:17:42.674591908 +0000 UTC m=+163.827007203 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs") pod "network-metrics-daemon-t8jfm" (UID: "8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.715809 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.715932 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.715962 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.716057 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.716103 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:38Z","lastTransitionTime":"2026-02-02T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.819510 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.819579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.819603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.819635 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.819657 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:38Z","lastTransitionTime":"2026-02-02T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.922927 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.923011 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.923029 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.923053 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:38 crc kubenswrapper[4858]: I0202 17:16:38.923074 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:38Z","lastTransitionTime":"2026-02-02T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.025417 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.025487 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.025512 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.025542 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.025564 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:39Z","lastTransitionTime":"2026-02-02T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.129216 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.129271 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.129289 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.129313 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.129331 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:39Z","lastTransitionTime":"2026-02-02T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.231835 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.231879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.231894 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.231916 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.231932 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:39Z","lastTransitionTime":"2026-02-02T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.335336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.335393 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.335410 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.335434 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.335450 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:39Z","lastTransitionTime":"2026-02-02T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
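Every repetition of the condition above traces back to one check: the container runtime reports NetworkReady=false because nothing has written a CNI configuration into /etc/kubernetes/cni/net.d/ yet. A small diagnostic sketch in Go that lists what a runtime would find there; the extensions checked (.conf, .conflist, .json) are the conventional CNI ones, and the directory comes straight from the logged message:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the NotReady condition
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, "->", err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		// Matches the state logged above: no config, so NetworkReady=false.
		fmt.Println("no CNI configuration files found")
	}
}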
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.410602 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:06:43.503840138 +0000 UTC
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.437573 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.437614 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.437624 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.437641 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.437652 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:39Z","lastTransitionTime":"2026-02-02T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.540058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.540096 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.540107 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.540131 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.540144 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:39Z","lastTransitionTime":"2026-02-02T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
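The certificate_manager line above reports the serving certificate's expiry and the randomized rotation deadline the kubelet picked for it. A minimal sketch for inspecting any PEM certificate's validity window the same way; the path is a placeholder for illustration, not necessarily where this node keeps its kubelet serving certificate:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Placeholder path; substitute the certificate you want to inspect.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-server-current.pem")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	now := time.Now()
	fmt.Println("NotBefore:", cert.NotBefore)
	fmt.Println("NotAfter: ", cert.NotAfter)
	fmt.Println("valid now:", now.After(cert.NotBefore) && now.Before(cert.NotAfter))
}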
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.643485 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.643542 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.643558 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.643583 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.643600 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:39Z","lastTransitionTime":"2026-02-02T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.746477 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.746537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.746559 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.746587 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.746608 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:39Z","lastTransitionTime":"2026-02-02T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.850104 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.850166 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.850183 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.850205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.850227 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:39Z","lastTransitionTime":"2026-02-02T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.953825 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.953915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.953959 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.954029 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:39 crc kubenswrapper[4858]: I0202 17:16:39.954047 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:39Z","lastTransitionTime":"2026-02-02T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.057790 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.057870 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.057896 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.057926 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.057947 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:40Z","lastTransitionTime":"2026-02-02T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.160322 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.160680 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.160835 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.161029 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.161228 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:40Z","lastTransitionTime":"2026-02-02T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
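The condition payload that setters.go emits on every iteration is plain JSON. A self-contained sketch that decodes it, with a local struct mirroring the keys seen in the log (a sketch for illustration, not the upstream Kubernetes NodeCondition type):

package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors the JSON keys logged by setters.go above.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Condition copied verbatim from the log entries above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:40Z","lastTransitionTime":"2026-02-02T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		fmt.Println("unmarshal:", err)
		return
	}
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}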
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.263657 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.263713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.263732 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.263759 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.263776 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:40Z","lastTransitionTime":"2026-02-02T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.366855 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.366923 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.366939 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.366965 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.367018 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:40Z","lastTransitionTime":"2026-02-02T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.400553 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.400621 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm"
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.400662 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 17:16:40 crc kubenswrapper[4858]: E0202 17:16:40.400799 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.400824 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 17:16:40 crc kubenswrapper[4858]: E0202 17:16:40.401053 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122"
Feb 02 17:16:40 crc kubenswrapper[4858]: E0202 17:16:40.401106 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 17:16:40 crc kubenswrapper[4858]: E0202 17:16:40.401191 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.411245 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 02:16:16.150434504 +0000 UTC
Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.422574 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d52b7595-c2f6-4d7b-a076-db28e0332d49\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d64509908882d8a14b1836f1c667e92c12f407f5108009c8a63dd841d034afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://71ae002ad3b0e97ea0ba6f6c7a1bee332c5a1644dabbe1af5ccf1b9325356c77\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe23a7d9f061976dbd948d705fa9d73bd8a5f9f01304c4d966fcc4501d84cac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.445476 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.469143 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.471657 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.471703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.471735 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.471753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.471764 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:40Z","lastTransitionTime":"2026-02-02T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.484288 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03a4872-ca6a-4233-bdbf-b31f7890dc3e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80911936318b3a9de14a9f71a7437f0ee6c9fdcb56c2f4efc568a5543a6c86e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fblg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lbvl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.515532 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce405d19-c944-4a11-8195-bca9289b8d73\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:16:16Z\\\",\\\"message\\\":\\\"hift.io/serving-cert-secret-name:dns-default-metrics-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{operator.openshift.io/v1 DNS default d8d88c7e-8c3e-49b6-8c5b-84aa454da2d7 0xc006ce8d37 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:dns,Protocol:UDP,Port:53,TargetPort:{1 0 dns},NodePort:0,AppProtocol:nil,},ServicePort{Name:dns-tcp,Protocol:TCP,Port:53,TargetPort:{1 0 dns-tcp},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.217.4.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0202 17:16:16.584589 6918 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:16:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wkm4w_openshift-ovn-kubernetes(ce405d19-c944-4a11-8195-bca9289b8d73)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-csh5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-wkm4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.539724 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6803a1fe-30ea-4c74-8a7f-743c3344d829\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925eeba4bab6bf2efe66b7e9bcc09451b9dd26cb3dc457fe4432b134ed68dfad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://845157900bb10cd29ec008f561118df2cbfcc30684394dbc201ad8a966aa8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98e87a8afe124154c232c26f9a43969189b865225c0162799857847f10af1257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://16072ce1c24706c267a0341779f05e661bac827
ae871ecc732f81923bb63bc99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c692114482fe72631826c6af5eb14c5ef0fdfc5500ed0ea3fd96d6009d5be562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47e8a4bda8b8d1eaa8151226f67cd39a31012435d83f5be86dd220fd2dbef5aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d2425044e7b5e2fd97a3fb1dfaf836233abdb754c9535276654c7bc5bd5b04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2f86447ee7b2349058d15ec28ae107ab57a99ebd6bfabd4485251687f1a094\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.560578 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://586c9f180c983beb2187348f2e68b1a8e550805c0f8035b72785f9845b00a14c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0629fa3631c6ec611ce9a9130532c9890bab8c4f4ca83042f9d4c8f1ec990690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.574655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.574730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.574741 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.574788 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.574803 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:40Z","lastTransitionTime":"2026-02-02T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.577723 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b070dee1026575fc13174bc4542b19d26a65f78951e524e035b6ce8be3d21e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.594671 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6hxtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f96e711-13fa-4105-b042-45fe046d3d35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca0363b669895519cf2438c9ea033dc0227b3f6fea175835586f9dc1252688c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zlbtc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6hxtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.604509 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ct8b7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76546fe8-0dad-45f2-aac1-2ec02ec40898\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0dc2e96af3e8efe18dbe2ac824f5305bfbc9fe047645fd495f9a6ca568ae1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggp5r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ct8b7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.618356 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9szlc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bc7963e-1bdc-4038-805e-bd72fc217a13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://485b3125d8343c8afd6e5d3b756e0a924c75da1d89c5e699ff825a8b46957bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T17:16:09Z\\\",\\\"message\\\":\\\"2026-02-02T17:15:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b3dbb378-edad-457f-9c64-6bd6330f05ec\\\\n2026-02-02T17:15:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b3dbb378-edad-457f-9c64-6bd6330f05ec to /host/opt/cni/bin/\\\\n2026-02-02T17:15:24Z [verbose] multus-daemon started\\\\n2026-02-02T17:15:24Z [verbose] Readiness Indicator file check\\\\n2026-02-02T17:16:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:16:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9szlc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.631271 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c423d6b-08b2-46d8-886e-3e7daea27bfc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d719263f90620c27bbf86fa46cc1140fd71aa3376d2c569ad8d07ff3e806ec1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c9312c3670640a2f0b63e269409ac820988ce1bac655c3a63f49c02fa88afd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e7c56ee2219f48c1296c4cb2e3cc2d390921a11d0faaf18ebcf1365dff431c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://370e43e433fb3658a272ea20e7cb648bcbf3c309ff85f6e430e5773e15f5aa6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.648950 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dce5b4fa8ae1fbd371b3a75bf6a2b061749d1b158fbf24b60a0a6fbebeee5e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:40Z is after 
2025-08-24T17:21:41Z" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.661692 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.672534 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c3c55b8-1193-47e3-ae7a-3a4b06df2884\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd7f6cd899675ca4ccd2da634b9c4865505db0ac1ca0f2ee8e2c65516fd2c190\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c9a3c7ca03f3b32cb484e20e016538920e05487b8c52cbe92c86e62ce30fe85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ksp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zhk49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 02 
17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.677017 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.677096 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.677111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.677172 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.677194 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:40Z","lastTransitionTime":"2026-02-02T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.682048 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clqcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-t8jfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.690642 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6634f69b-ffaa-4e1d-8fe0-a19cc831ae4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28be335257f0bb593dbdf82cf32dd5da94532c27adb666a282caf1b7bc13b496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f7b0faa29b8fae2cba1cf6ffa6b5ef1761f8a36172383eef6aa214a2c0d881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f7b0faa29b8fae2cba1cf6ffa6b5ef1761f8a36172383eef6aa214a2c0d881\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.701377 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cc5a816-d9d0-41c0-877c-250e077ef445\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T17:15:19Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 17:15:14.099362 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 17:15:14.102453 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2083573188/tls.crt::/tmp/serving-cert-2083573188/tls.key\\\\\\\"\\\\nI0202 17:15:19.591422 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 17:15:19.597363 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 17:15:19.597387 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 17:15:19.597410 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 17:15:19.597416 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 17:15:19.647613 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 17:15:19.647820 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 17:15:19.647882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0202 17:15:19.647642 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0202 17:15:19.647909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 17:15:19.648000 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 17:15:19.648027 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0202 17:15:19.650075 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:03Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:00Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.713021 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6nv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341ca71a-aaf0-403c-8ecd-bbf2a70b031b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T17:15:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5580328d3eb1cbccbf31e6a18c6589dd3745f01a1451a9811a838f03a79ef444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe626aefd1d0b0109cf56a86a65360997fa864bf1280b9a20cedfdee3a560c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd4c8292ba3bfd3a7f8f941ca9c404936de1a01da137be157f68bae2f787a175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2417b37a14f8422341adaf1ad21256889c68fbd879e3bb8e3b9c5a1ccfc65114\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://973b56cc5a6d2ba670cd06f40623e563b79caf292adf3c4ee17e3a6ea38908b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f0f5464cf957856607b7421e83cf9cd39e0224366345a053f086821e8f069b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030bc2979ffeef39e9169d3ada37cde6d453f035a623005b1f31fcb029f7d319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T17:15:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T17:15:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plt28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T17:15:21Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6nv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.780149 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.780304 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:40 crc 
kubenswrapper[4858]: I0202 17:16:40.780342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.780374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:40 crc kubenswrapper[4858]: I0202 17:16:40.780445 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:40Z","lastTransitionTime":"2026-02-02T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:41 crc kubenswrapper[4858]: I0202 17:16:41.412244 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:15:36.492303235 +0000 UTC Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.400399 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.400504 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.400504 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:42 crc kubenswrapper[4858]: E0202 17:16:42.400635 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:42 crc kubenswrapper[4858]: E0202 17:16:42.400861 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.400909 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:42 crc kubenswrapper[4858]: E0202 17:16:42.401068 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:42 crc kubenswrapper[4858]: E0202 17:16:42.401358 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.409191 4858 scope.go:117] "RemoveContainer" containerID="b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6" Feb 02 17:16:42 crc kubenswrapper[4858]: E0202 17:16:42.410062 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wkm4w_openshift-ovn-kubernetes(ce405d19-c944-4a11-8195-bca9289b8d73)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.412556 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 02:24:19.625694655 +0000 UTC Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.430737 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.430795 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.430811 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.430830 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.430843 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:42Z","lastTransitionTime":"2026-02-02T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.533563 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.533652 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.533670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.533695 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.533716 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:42Z","lastTransitionTime":"2026-02-02T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.637375 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.637442 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.637460 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.637532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.637552 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:42Z","lastTransitionTime":"2026-02-02T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.740242 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.740308 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.740320 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.740337 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.740349 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:42Z","lastTransitionTime":"2026-02-02T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.844145 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.844202 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.844214 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.844232 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.844246 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:42Z","lastTransitionTime":"2026-02-02T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.948198 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.948250 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.948266 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.948288 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:42 crc kubenswrapper[4858]: I0202 17:16:42.948304 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:42Z","lastTransitionTime":"2026-02-02T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.051750 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.051828 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.051842 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.051859 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.051888 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:43Z","lastTransitionTime":"2026-02-02T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.154232 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.154267 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.154276 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.154290 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.154300 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:43Z","lastTransitionTime":"2026-02-02T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.257641 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.257699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.257713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.257734 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.257746 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:43Z","lastTransitionTime":"2026-02-02T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.360775 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.360841 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.360854 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.360873 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.360888 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:43Z","lastTransitionTime":"2026-02-02T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.413830 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 04:25:04.869745936 +0000 UTC Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.462944 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.463014 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.463023 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.463036 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.463057 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:43Z","lastTransitionTime":"2026-02-02T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.565349 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.565401 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.565410 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.565426 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.565436 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:43Z","lastTransitionTime":"2026-02-02T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.667393 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.667772 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.667874 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.667998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.668143 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:43Z","lastTransitionTime":"2026-02-02T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.771423 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.771488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.771511 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.771539 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.771560 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:43Z","lastTransitionTime":"2026-02-02T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.874682 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.874767 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.874784 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.874808 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.874825 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:43Z","lastTransitionTime":"2026-02-02T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.978329 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.978394 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.978405 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.978425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:43 crc kubenswrapper[4858]: I0202 17:16:43.978436 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:43Z","lastTransitionTime":"2026-02-02T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.081058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.081114 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.081127 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.081147 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.081158 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:44Z","lastTransitionTime":"2026-02-02T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.183217 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.183321 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.183342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.183359 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.183370 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:44Z","lastTransitionTime":"2026-02-02T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.286087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.286141 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.286155 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.286178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.286190 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:44Z","lastTransitionTime":"2026-02-02T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.388500 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.388545 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.388556 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.388574 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.388589 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:44Z","lastTransitionTime":"2026-02-02T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.400207 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.400441 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.400408 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:44 crc kubenswrapper[4858]: E0202 17:16:44.400518 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:44 crc kubenswrapper[4858]: E0202 17:16:44.400572 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.400456 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:44 crc kubenswrapper[4858]: E0202 17:16:44.400703 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:44 crc kubenswrapper[4858]: E0202 17:16:44.401180 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.414406 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:54:02.140911963 +0000 UTC Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.492119 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.492160 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.492171 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.492208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.492221 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:44Z","lastTransitionTime":"2026-02-02T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.594521 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.594600 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.594614 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.594631 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.594642 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:44Z","lastTransitionTime":"2026-02-02T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.697316 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.697361 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.697381 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.697396 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.697406 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:44Z","lastTransitionTime":"2026-02-02T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.799702 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.799762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.799778 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.799802 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.799829 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:44Z","lastTransitionTime":"2026-02-02T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.902802 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.902874 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.902895 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.902924 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:44 crc kubenswrapper[4858]: I0202 17:16:44.902951 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:44Z","lastTransitionTime":"2026-02-02T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.008491 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.010345 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.010930 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.011224 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.011678 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:45Z","lastTransitionTime":"2026-02-02T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.115268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.115326 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.115340 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.115360 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.115406 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:45Z","lastTransitionTime":"2026-02-02T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.218594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.218659 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.218677 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.218702 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.218718 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:45Z","lastTransitionTime":"2026-02-02T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.321448 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.321520 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.321533 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.321572 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.321585 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:45Z","lastTransitionTime":"2026-02-02T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.415186 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 07:18:09.406531753 +0000 UTC Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.424753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.424824 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.424842 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.424867 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.424888 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:45Z","lastTransitionTime":"2026-02-02T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.528286 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.528335 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.528345 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.528366 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.528378 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:45Z","lastTransitionTime":"2026-02-02T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.631462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.631494 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.631502 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.631515 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.631525 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:45Z","lastTransitionTime":"2026-02-02T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.734235 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.734313 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.734337 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.734371 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.734390 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:45Z","lastTransitionTime":"2026-02-02T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.837255 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.837319 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.837335 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.837365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.837383 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:45Z","lastTransitionTime":"2026-02-02T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.940329 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.940387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.940402 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.940427 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:45 crc kubenswrapper[4858]: I0202 17:16:45.940514 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:45Z","lastTransitionTime":"2026-02-02T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.043203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.043288 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.043315 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.043340 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.043371 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:46Z","lastTransitionTime":"2026-02-02T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.146711 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.146749 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.146761 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.146777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.146787 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:46Z","lastTransitionTime":"2026-02-02T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.249083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.249124 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.249135 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.249150 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.249161 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:46Z","lastTransitionTime":"2026-02-02T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.351394 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.351474 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.351499 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.351532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.351558 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:46Z","lastTransitionTime":"2026-02-02T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.400443 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:46 crc kubenswrapper[4858]: E0202 17:16:46.400579 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.400443 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.400633 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:46 crc kubenswrapper[4858]: E0202 17:16:46.400763 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:46 crc kubenswrapper[4858]: E0202 17:16:46.401063 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.401103 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:46 crc kubenswrapper[4858]: E0202 17:16:46.401230 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
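
Every pod_workers.go error in this stretch reduces to the same readiness predicate: the kubelet keeps NetworkReady=false because nothing has yet written a network config into /etc/kubernetes/cni/net.d/. A minimal Python sketch of that presence check follows; the directory path comes straight from the log message, while the accepted file extensions are an assumption based on common CNI loader conventions, so this is an illustrative probe rather than the actual kubelet/CRI-O code.

import sys
from pathlib import Path

# Path copied from the log message; extensions are an assumption based on
# common CNI config loaders, not taken from kubelet source.
CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")
CNI_EXTENSIONS = {".conf", ".conflist", ".json"}

def cni_config_present(conf_dir: Path = CNI_CONF_DIR) -> bool:
    """True once at least one plausible CNI network config file exists."""
    if not conf_dir.is_dir():
        return False
    return any(p.is_file() and p.suffix in CNI_EXTENSIONS for p in conf_dir.iterdir())

if __name__ == "__main__":
    if cni_config_present():
        print(f"config present in {CNI_CONF_DIR}; NetworkReady should flip to true")
        sys.exit(0)
    print(f"no CNI configuration file in {CNI_CONF_DIR} (matches the NetworkPluginNotReady records above)")
    sys.exit(1)

Once the network operator drops a conflist into that directory, the same probe would exit 0 and the kubelet's next sync should report NetworkReady=true.
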
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.416022 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 09:34:04.210073246 +0000 UTC Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.454498 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.454538 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.454551 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.454566 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.454576 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:46Z","lastTransitionTime":"2026-02-02T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.557136 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.557177 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.557189 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.557205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.557217 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:46Z","lastTransitionTime":"2026-02-02T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.660523 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.660589 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.660607 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.660634 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.660654 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:46Z","lastTransitionTime":"2026-02-02T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.763641 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.763690 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.763700 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.763716 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.763727 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:46Z","lastTransitionTime":"2026-02-02T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.866617 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.866662 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.866672 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.866687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.866701 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:46Z","lastTransitionTime":"2026-02-02T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.969146 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.969192 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.969202 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.969221 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:46 crc kubenswrapper[4858]: I0202 17:16:46.969231 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:46Z","lastTransitionTime":"2026-02-02T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.071743 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.071798 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.071815 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.071839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.071856 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:47Z","lastTransitionTime":"2026-02-02T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.175082 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.175137 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.175156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.175178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.175195 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:47Z","lastTransitionTime":"2026-02-02T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.278278 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.278734 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.278896 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.279088 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.279219 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:47Z","lastTransitionTime":"2026-02-02T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.382763 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.383206 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.383354 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.383556 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.383701 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:47Z","lastTransitionTime":"2026-02-02T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.416168 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 01:35:28.830610655 +0000 UTC Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.487319 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.487387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.487405 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.487430 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.487449 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:47Z","lastTransitionTime":"2026-02-02T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.590288 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.590340 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.590350 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.590369 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.590381 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:47Z","lastTransitionTime":"2026-02-02T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.693203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.693259 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.693276 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.693299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.693316 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:47Z","lastTransitionTime":"2026-02-02T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.795430 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.795491 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.795513 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.795544 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.795567 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:47Z","lastTransitionTime":"2026-02-02T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.898797 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.898917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.898936 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.898958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:47 crc kubenswrapper[4858]: I0202 17:16:47.898998 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:47Z","lastTransitionTime":"2026-02-02T17:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.001519 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.001579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.001596 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.001620 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.001637 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:48Z","lastTransitionTime":"2026-02-02T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.104633 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.104676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.104693 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.104716 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.104734 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:48Z","lastTransitionTime":"2026-02-02T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.207679 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.207723 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.207738 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.207761 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.207777 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:48Z","lastTransitionTime":"2026-02-02T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.310022 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.310377 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.310572 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.310756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.310914 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:48Z","lastTransitionTime":"2026-02-02T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.400247 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.400360 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:48 crc kubenswrapper[4858]: E0202 17:16:48.401014 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.400552 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.400495 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:48 crc kubenswrapper[4858]: E0202 17:16:48.401600 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:48 crc kubenswrapper[4858]: E0202 17:16:48.402053 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
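
The setters.go:603 records repeat at roughly 100 ms intervals, and each one carries the node's full Ready condition as JSON after condition=. To measure how long the node sat NotReady it is easier to parse those payloads than to eyeball timestamps; the sketch below does that for lines shaped like the ones in this journal (the regex is illustrative and assumes the condition object is the last field on the record, which holds for every sample here).

import json
import re

# Matches the klog field layout seen above; assumes condition={...} ends the record.
CONDITION_RE = re.compile(r'"Node became not ready" node="(?P<node>[^"]+)" condition=(?P<cond>\{.*\})')

def parse_not_ready(line: str):
    """Return (node, condition dict) for a 'Node became not ready' record, else None."""
    m = CONDITION_RE.search(line)
    if m is None:
        return None
    return m.group("node"), json.loads(m.group("cond"))

# Sample record copied from the log above (message truncated here for brevity):
sample = ('Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.413725 4858 '
          'setters.go:603] "Node became not ready" node="crc" '
          'condition={"type":"Ready","status":"False",'
          '"lastHeartbeatTime":"2026-02-02T17:16:48Z",'
          '"lastTransitionTime":"2026-02-02T17:16:48Z",'
          '"reason":"KubeletNotReady","message":"container runtime network not ready"}')

node, cond = parse_not_ready(sample)
print(node, cond["reason"], cond["lastHeartbeatTime"])  # -> crc KubeletNotReady 2026-02-02T17:16:48Z

Feeding the whole journal through this and diffing consecutive lastHeartbeatTime values would confirm the roughly 100 ms status cadence visible above.
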
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:48 crc kubenswrapper[4858]: E0202 17:16:48.402195 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.413606 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.413670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.413686 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.413708 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.413725 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:48Z","lastTransitionTime":"2026-02-02T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.416817 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 23:44:31.704239183 +0000 UTC Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.474067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.474113 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.474124 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.474140 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.474151 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:48Z","lastTransitionTime":"2026-02-02T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 17:16:48 crc kubenswrapper[4858]: E0202 17:16:48.491169 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T17:16:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T17:16:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b1513e64-30b8-48d2-874a-29d4cc9d3b3d\\\",\\\"systemUUID\\\":\\\"152e49e6-c863-4d14-b212-d4d9f0b62e1a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.495955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.496055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.496077 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.496099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.496116 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:48Z","lastTransitionTime":"2026-02-02T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.530370 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.530423 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.530440 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.530463 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.530480 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T17:16:48Z","lastTransitionTime":"2026-02-02T17:16:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.557579 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-qzj8h"] Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.558273 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qzj8h" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.561586 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.562141 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.562623 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.563045 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.597576 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=85.597553877 podStartE2EDuration="1m25.597553877s" podCreationTimestamp="2026-02-02 17:15:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:16:48.597302981 +0000 UTC m=+109.749718286" watchObservedRunningTime="2026-02-02 17:16:48.597553877 +0000 UTC m=+109.749969142" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.646285 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-6hxtm" podStartSLOduration=89.645459108 podStartE2EDuration="1m29.645459108s" podCreationTimestamp="2026-02-02 17:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:16:48.645237132 +0000 UTC m=+109.797652417" watchObservedRunningTime="2026-02-02 17:16:48.645459108 +0000 UTC m=+109.797874413" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.660263 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-ct8b7" podStartSLOduration=88.660246018 podStartE2EDuration="1m28.660246018s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:16:48.658818789 +0000 UTC m=+109.811234054" watchObservedRunningTime="2026-02-02 17:16:48.660246018 +0000 UTC m=+109.812661293" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.673838 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9szlc" podStartSLOduration=88.673818215 podStartE2EDuration="1m28.673818215s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:16:48.673606629 +0000 UTC m=+109.826021894" watchObservedRunningTime="2026-02-02 17:16:48.673818215 +0000 UTC m=+109.826233510" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.687088 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d7efd01-8a4a-48e4-b415-2ad072fb15b2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qzj8h\" (UID: \"8d7efd01-8a4a-48e4-b415-2ad072fb15b2\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qzj8h" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.690056 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d7efd01-8a4a-48e4-b415-2ad072fb15b2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qzj8h\" (UID: \"8d7efd01-8a4a-48e4-b415-2ad072fb15b2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qzj8h" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.690298 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8d7efd01-8a4a-48e4-b415-2ad072fb15b2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qzj8h\" (UID: \"8d7efd01-8a4a-48e4-b415-2ad072fb15b2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qzj8h" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.690871 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8d7efd01-8a4a-48e4-b415-2ad072fb15b2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qzj8h\" (UID: \"8d7efd01-8a4a-48e4-b415-2ad072fb15b2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qzj8h" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.691271 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8d7efd01-8a4a-48e4-b415-2ad072fb15b2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qzj8h\" (UID: \"8d7efd01-8a4a-48e4-b415-2ad072fb15b2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qzj8h" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.691456 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podStartSLOduration=88.691439715 podStartE2EDuration="1m28.691439715s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:16:48.68766738 +0000 UTC m=+109.840082675" watchObservedRunningTime="2026-02-02 17:16:48.691439715 +0000 UTC m=+109.843854980" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.769613 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=54.769592545 podStartE2EDuration="54.769592545s" podCreationTimestamp="2026-02-02 17:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:16:48.76941655 +0000 UTC m=+109.921831815" watchObservedRunningTime="2026-02-02 17:16:48.769592545 +0000 UTC m=+109.922007810" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.792105 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d7efd01-8a4a-48e4-b415-2ad072fb15b2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qzj8h\" (UID: \"8d7efd01-8a4a-48e4-b415-2ad072fb15b2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qzj8h" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.792388 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d7efd01-8a4a-48e4-b415-2ad072fb15b2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qzj8h\" (UID: \"8d7efd01-8a4a-48e4-b415-2ad072fb15b2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qzj8h" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.792536 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8d7efd01-8a4a-48e4-b415-2ad072fb15b2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qzj8h\" (UID: \"8d7efd01-8a4a-48e4-b415-2ad072fb15b2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qzj8h" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.792632 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8d7efd01-8a4a-48e4-b415-2ad072fb15b2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qzj8h\" (UID: \"8d7efd01-8a4a-48e4-b415-2ad072fb15b2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qzj8h" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.792740 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8d7efd01-8a4a-48e4-b415-2ad072fb15b2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qzj8h\" (UID: \"8d7efd01-8a4a-48e4-b415-2ad072fb15b2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qzj8h" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.792754 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8d7efd01-8a4a-48e4-b415-2ad072fb15b2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qzj8h\" (UID: \"8d7efd01-8a4a-48e4-b415-2ad072fb15b2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qzj8h" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.792896 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8d7efd01-8a4a-48e4-b415-2ad072fb15b2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qzj8h\" (UID: \"8d7efd01-8a4a-48e4-b415-2ad072fb15b2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qzj8h" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.794228 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8d7efd01-8a4a-48e4-b415-2ad072fb15b2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qzj8h\" (UID: \"8d7efd01-8a4a-48e4-b415-2ad072fb15b2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qzj8h" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.801235 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d7efd01-8a4a-48e4-b415-2ad072fb15b2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qzj8h\" (UID: \"8d7efd01-8a4a-48e4-b415-2ad072fb15b2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qzj8h" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.820210 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8d7efd01-8a4a-48e4-b415-2ad072fb15b2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qzj8h\" (UID: \"8d7efd01-8a4a-48e4-b415-2ad072fb15b2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qzj8h" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.826503 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zhk49" podStartSLOduration=87.826475654 podStartE2EDuration="1m27.826475654s" podCreationTimestamp="2026-02-02 17:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:16:48.826194337 +0000 UTC m=+109.978609602" watchObservedRunningTime="2026-02-02 17:16:48.826475654 +0000 UTC m=+109.978890959" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.873829 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=18.873807889 podStartE2EDuration="18.873807889s" podCreationTimestamp="2026-02-02 17:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:16:48.873708616 +0000 UTC m=+110.026123931" watchObservedRunningTime="2026-02-02 17:16:48.873807889 +0000 UTC m=+110.026223154" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.879912 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qzj8h" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.891183 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.891164281 podStartE2EDuration="1m28.891164281s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:16:48.890952145 +0000 UTC m=+110.043367450" watchObservedRunningTime="2026-02-02 17:16:48.891164281 +0000 UTC m=+110.043579556" Feb 02 17:16:48 crc kubenswrapper[4858]: W0202 17:16:48.901614 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d7efd01_8a4a_48e4_b415_2ad072fb15b2.slice/crio-be629921e9259ccb32feb08238221f7aae4b66a5b02d8917b69593ad7403ba5c WatchSource:0}: Error finding container be629921e9259ccb32feb08238221f7aae4b66a5b02d8917b69593ad7403ba5c: Status 404 returned error can't find the container with id be629921e9259ccb32feb08238221f7aae4b66a5b02d8917b69593ad7403ba5c Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.937100 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.937076656 podStartE2EDuration="1m29.937076656s" podCreationTimestamp="2026-02-02 17:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:16:48.934741151 +0000 UTC m=+110.087156416" watchObservedRunningTime="2026-02-02 17:16:48.937076656 +0000 UTC m=+110.089491921" Feb 02 17:16:48 crc kubenswrapper[4858]: I0202 17:16:48.937284 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-multus/multus-additional-cni-plugins-6nv4v" podStartSLOduration=88.937280371 podStartE2EDuration="1m28.937280371s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:16:48.912538054 +0000 UTC m=+110.064953319" watchObservedRunningTime="2026-02-02 17:16:48.937280371 +0000 UTC m=+110.089695636" Feb 02 17:16:49 crc kubenswrapper[4858]: I0202 17:16:49.015325 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qzj8h" event={"ID":"8d7efd01-8a4a-48e4-b415-2ad072fb15b2","Type":"ContainerStarted","Data":"be629921e9259ccb32feb08238221f7aae4b66a5b02d8917b69593ad7403ba5c"} Feb 02 17:16:49 crc kubenswrapper[4858]: I0202 17:16:49.417461 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 10:07:57.241600464 +0000 UTC Feb 02 17:16:49 crc kubenswrapper[4858]: I0202 17:16:49.417545 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 02 17:16:49 crc kubenswrapper[4858]: I0202 17:16:49.426690 4858 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 02 17:16:50 crc kubenswrapper[4858]: I0202 17:16:50.021298 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qzj8h" event={"ID":"8d7efd01-8a4a-48e4-b415-2ad072fb15b2","Type":"ContainerStarted","Data":"b318b0c6fd405e7e2741619b18ca2194d301bd1aed0b6e49d5cecddcef55effb"} Feb 02 17:16:50 crc kubenswrapper[4858]: I0202 17:16:50.038245 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qzj8h" podStartSLOduration=90.038223784 podStartE2EDuration="1m30.038223784s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:16:50.036355102 +0000 UTC m=+111.188770407" watchObservedRunningTime="2026-02-02 17:16:50.038223784 +0000 UTC m=+111.190639089" Feb 02 17:16:50 crc kubenswrapper[4858]: I0202 17:16:50.400110 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:50 crc kubenswrapper[4858]: E0202 17:16:50.402060 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:50 crc kubenswrapper[4858]: I0202 17:16:50.402135 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:50 crc kubenswrapper[4858]: I0202 17:16:50.402162 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:50 crc kubenswrapper[4858]: I0202 17:16:50.402231 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:50 crc kubenswrapper[4858]: E0202 17:16:50.402568 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:50 crc kubenswrapper[4858]: E0202 17:16:50.402730 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:50 crc kubenswrapper[4858]: E0202 17:16:50.402866 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:52 crc kubenswrapper[4858]: I0202 17:16:52.399938 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:52 crc kubenswrapper[4858]: E0202 17:16:52.400179 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:52 crc kubenswrapper[4858]: I0202 17:16:52.400480 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:52 crc kubenswrapper[4858]: E0202 17:16:52.400566 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:52 crc kubenswrapper[4858]: I0202 17:16:52.400761 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:52 crc kubenswrapper[4858]: E0202 17:16:52.400845 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:52 crc kubenswrapper[4858]: I0202 17:16:52.401115 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:52 crc kubenswrapper[4858]: E0202 17:16:52.401212 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:54 crc kubenswrapper[4858]: I0202 17:16:54.400637 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:54 crc kubenswrapper[4858]: I0202 17:16:54.400683 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:54 crc kubenswrapper[4858]: E0202 17:16:54.400829 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:54 crc kubenswrapper[4858]: I0202 17:16:54.400855 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:54 crc kubenswrapper[4858]: E0202 17:16:54.400956 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:54 crc kubenswrapper[4858]: I0202 17:16:54.401040 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:54 crc kubenswrapper[4858]: E0202 17:16:54.401118 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:54 crc kubenswrapper[4858]: E0202 17:16:54.401196 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:54 crc kubenswrapper[4858]: I0202 17:16:54.402092 4858 scope.go:117] "RemoveContainer" containerID="b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6" Feb 02 17:16:54 crc kubenswrapper[4858]: E0202 17:16:54.402305 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-wkm4w_openshift-ovn-kubernetes(ce405d19-c944-4a11-8195-bca9289b8d73)\"" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" Feb 02 17:16:56 crc kubenswrapper[4858]: I0202 17:16:56.042098 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9szlc_4bc7963e-1bdc-4038-805e-bd72fc217a13/kube-multus/1.log" Feb 02 17:16:56 crc kubenswrapper[4858]: I0202 17:16:56.043183 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9szlc_4bc7963e-1bdc-4038-805e-bd72fc217a13/kube-multus/0.log" Feb 02 17:16:56 crc kubenswrapper[4858]: I0202 17:16:56.043246 4858 generic.go:334] "Generic (PLEG): container finished" podID="4bc7963e-1bdc-4038-805e-bd72fc217a13" containerID="485b3125d8343c8afd6e5d3b756e0a924c75da1d89c5e699ff825a8b46957bb7" exitCode=1 Feb 02 17:16:56 crc kubenswrapper[4858]: I0202 17:16:56.043286 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9szlc" event={"ID":"4bc7963e-1bdc-4038-805e-bd72fc217a13","Type":"ContainerDied","Data":"485b3125d8343c8afd6e5d3b756e0a924c75da1d89c5e699ff825a8b46957bb7"} Feb 02 17:16:56 crc kubenswrapper[4858]: I0202 17:16:56.043330 4858 scope.go:117] "RemoveContainer" containerID="a59b9be9d88ddb7f0a4b7c29467c616216c8b14502941cc1d6e43a922d903567" Feb 02 17:16:56 crc kubenswrapper[4858]: I0202 17:16:56.043906 4858 scope.go:117] "RemoveContainer" containerID="485b3125d8343c8afd6e5d3b756e0a924c75da1d89c5e699ff825a8b46957bb7" Feb 02 17:16:56 crc kubenswrapper[4858]: E0202 17:16:56.044531 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-9szlc_openshift-multus(4bc7963e-1bdc-4038-805e-bd72fc217a13)\"" pod="openshift-multus/multus-9szlc" podUID="4bc7963e-1bdc-4038-805e-bd72fc217a13" Feb 02 17:16:56 crc kubenswrapper[4858]: I0202 17:16:56.400446 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:56 crc kubenswrapper[4858]: I0202 17:16:56.400592 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:56 crc kubenswrapper[4858]: I0202 17:16:56.400936 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:56 crc kubenswrapper[4858]: I0202 17:16:56.401040 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:56 crc kubenswrapper[4858]: E0202 17:16:56.401255 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:56 crc kubenswrapper[4858]: E0202 17:16:56.401430 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:16:56 crc kubenswrapper[4858]: E0202 17:16:56.401567 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:56 crc kubenswrapper[4858]: E0202 17:16:56.401685 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:57 crc kubenswrapper[4858]: I0202 17:16:57.049130 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9szlc_4bc7963e-1bdc-4038-805e-bd72fc217a13/kube-multus/1.log" Feb 02 17:16:58 crc kubenswrapper[4858]: I0202 17:16:58.400689 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:16:58 crc kubenswrapper[4858]: I0202 17:16:58.400814 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:16:58 crc kubenswrapper[4858]: E0202 17:16:58.400897 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:16:58 crc kubenswrapper[4858]: I0202 17:16:58.400938 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:16:58 crc kubenswrapper[4858]: E0202 17:16:58.401150 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:16:58 crc kubenswrapper[4858]: I0202 17:16:58.401200 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:16:58 crc kubenswrapper[4858]: E0202 17:16:58.401296 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:16:58 crc kubenswrapper[4858]: E0202 17:16:58.401469 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:17:00 crc kubenswrapper[4858]: E0202 17:17:00.361011 4858 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 02 17:17:00 crc kubenswrapper[4858]: I0202 17:17:00.399840 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:17:00 crc kubenswrapper[4858]: I0202 17:17:00.399933 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:17:00 crc kubenswrapper[4858]: E0202 17:17:00.402220 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:17:00 crc kubenswrapper[4858]: I0202 17:17:00.402274 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:17:00 crc kubenswrapper[4858]: E0202 17:17:00.402416 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:17:00 crc kubenswrapper[4858]: I0202 17:17:00.402313 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:17:00 crc kubenswrapper[4858]: E0202 17:17:00.402589 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:17:00 crc kubenswrapper[4858]: E0202 17:17:00.402666 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:17:00 crc kubenswrapper[4858]: E0202 17:17:00.501185 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 17:17:02 crc kubenswrapper[4858]: I0202 17:17:02.400402 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:17:02 crc kubenswrapper[4858]: I0202 17:17:02.400486 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:17:02 crc kubenswrapper[4858]: I0202 17:17:02.400566 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:17:02 crc kubenswrapper[4858]: E0202 17:17:02.400890 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:17:02 crc kubenswrapper[4858]: I0202 17:17:02.400907 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:17:02 crc kubenswrapper[4858]: E0202 17:17:02.401038 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:17:02 crc kubenswrapper[4858]: E0202 17:17:02.401141 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:17:02 crc kubenswrapper[4858]: E0202 17:17:02.401204 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:17:04 crc kubenswrapper[4858]: I0202 17:17:04.400388 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:17:04 crc kubenswrapper[4858]: I0202 17:17:04.400463 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:17:04 crc kubenswrapper[4858]: I0202 17:17:04.400420 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:17:04 crc kubenswrapper[4858]: E0202 17:17:04.400770 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:17:04 crc kubenswrapper[4858]: E0202 17:17:04.400609 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:17:04 crc kubenswrapper[4858]: I0202 17:17:04.400844 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:17:04 crc kubenswrapper[4858]: E0202 17:17:04.400923 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:17:04 crc kubenswrapper[4858]: E0202 17:17:04.401029 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:17:05 crc kubenswrapper[4858]: E0202 17:17:05.503208 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 17:17:06 crc kubenswrapper[4858]: I0202 17:17:06.400408 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:17:06 crc kubenswrapper[4858]: I0202 17:17:06.400493 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:17:06 crc kubenswrapper[4858]: I0202 17:17:06.401083 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:17:06 crc kubenswrapper[4858]: I0202 17:17:06.401354 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:17:06 crc kubenswrapper[4858]: E0202 17:17:06.401545 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:17:06 crc kubenswrapper[4858]: E0202 17:17:06.401877 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:17:06 crc kubenswrapper[4858]: I0202 17:17:06.401921 4858 scope.go:117] "RemoveContainer" containerID="b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6" Feb 02 17:17:06 crc kubenswrapper[4858]: E0202 17:17:06.402084 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:17:06 crc kubenswrapper[4858]: E0202 17:17:06.402201 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:17:07 crc kubenswrapper[4858]: I0202 17:17:07.089549 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wkm4w_ce405d19-c944-4a11-8195-bca9289b8d73/ovnkube-controller/3.log" Feb 02 17:17:07 crc kubenswrapper[4858]: I0202 17:17:07.092202 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerStarted","Data":"f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321"} Feb 02 17:17:07 crc kubenswrapper[4858]: I0202 17:17:07.092758 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" Feb 02 17:17:07 crc kubenswrapper[4858]: I0202 17:17:07.120010 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" podStartSLOduration=107.119956236 podStartE2EDuration="1m47.119956236s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:07.119635087 +0000 UTC m=+128.272050412" watchObservedRunningTime="2026-02-02 17:17:07.119956236 +0000 UTC m=+128.272371541" Feb 02 17:17:07 crc kubenswrapper[4858]: I0202 17:17:07.415612 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-t8jfm"] Feb 02 17:17:07 crc kubenswrapper[4858]: I0202 17:17:07.415773 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:17:07 crc kubenswrapper[4858]: E0202 17:17:07.415911 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:17:08 crc kubenswrapper[4858]: I0202 17:17:08.399962 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:17:08 crc kubenswrapper[4858]: E0202 17:17:08.400445 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:17:08 crc kubenswrapper[4858]: I0202 17:17:08.400187 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:17:08 crc kubenswrapper[4858]: I0202 17:17:08.400019 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:17:08 crc kubenswrapper[4858]: E0202 17:17:08.400574 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 17:17:08 crc kubenswrapper[4858]: E0202 17:17:08.400715 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 17:17:09 crc kubenswrapper[4858]: I0202 17:17:09.400460 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:17:09 crc kubenswrapper[4858]: E0202 17:17:09.400597 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122" Feb 02 17:17:10 crc kubenswrapper[4858]: I0202 17:17:10.399691 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:17:10 crc kubenswrapper[4858]: I0202 17:17:10.399810 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:17:10 crc kubenswrapper[4858]: I0202 17:17:10.400338 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:17:10 crc kubenswrapper[4858]: E0202 17:17:10.402192 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 17:17:10 crc kubenswrapper[4858]: E0202 17:17:10.402285 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 17:17:10 crc kubenswrapper[4858]: E0202 17:17:10.402402 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 17:17:10 crc kubenswrapper[4858]: E0202 17:17:10.503863 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 02 17:17:11 crc kubenswrapper[4858]: I0202 17:17:11.400137 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm"
Feb 02 17:17:11 crc kubenswrapper[4858]: E0202 17:17:11.400348 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122"
Feb 02 17:17:11 crc kubenswrapper[4858]: I0202 17:17:11.400470 4858 scope.go:117] "RemoveContainer" containerID="485b3125d8343c8afd6e5d3b756e0a924c75da1d89c5e699ff825a8b46957bb7"
Feb 02 17:17:12 crc kubenswrapper[4858]: I0202 17:17:12.111822 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9szlc_4bc7963e-1bdc-4038-805e-bd72fc217a13/kube-multus/1.log"
Feb 02 17:17:12 crc kubenswrapper[4858]: I0202 17:17:12.112141 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9szlc" event={"ID":"4bc7963e-1bdc-4038-805e-bd72fc217a13","Type":"ContainerStarted","Data":"dc80d934773e7a1085767db5e7b28c615ef7491dfabb021de55cba2328bca076"}
Feb 02 17:17:12 crc kubenswrapper[4858]: I0202 17:17:12.400027 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 17:17:12 crc kubenswrapper[4858]: I0202 17:17:12.400136 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 17:17:12 crc kubenswrapper[4858]: E0202 17:17:12.400166 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 17:17:12 crc kubenswrapper[4858]: I0202 17:17:12.400145 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 17:17:12 crc kubenswrapper[4858]: E0202 17:17:12.400451 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 17:17:12 crc kubenswrapper[4858]: E0202 17:17:12.400584 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 17:17:13 crc kubenswrapper[4858]: I0202 17:17:13.400691 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm"
Feb 02 17:17:13 crc kubenswrapper[4858]: E0202 17:17:13.400895 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122"
Feb 02 17:17:14 crc kubenswrapper[4858]: I0202 17:17:14.400174 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 17:17:14 crc kubenswrapper[4858]: I0202 17:17:14.400217 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 17:17:14 crc kubenswrapper[4858]: I0202 17:17:14.400259 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 17:17:14 crc kubenswrapper[4858]: E0202 17:17:14.400412 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 17:17:14 crc kubenswrapper[4858]: E0202 17:17:14.400582 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 17:17:14 crc kubenswrapper[4858]: E0202 17:17:14.400681 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 17:17:15 crc kubenswrapper[4858]: I0202 17:17:15.400045 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm"
Feb 02 17:17:15 crc kubenswrapper[4858]: E0202 17:17:15.400274 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-t8jfm" podUID="8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122"
Feb 02 17:17:16 crc kubenswrapper[4858]: I0202 17:17:16.399887 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 17:17:16 crc kubenswrapper[4858]: I0202 17:17:16.399925 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 17:17:16 crc kubenswrapper[4858]: I0202 17:17:16.400016 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 17:17:16 crc kubenswrapper[4858]: I0202 17:17:16.402271 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 02 17:17:16 crc kubenswrapper[4858]: I0202 17:17:16.402382 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 02 17:17:16 crc kubenswrapper[4858]: I0202 17:17:16.403471 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 02 17:17:16 crc kubenswrapper[4858]: I0202 17:17:16.403666 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 02 17:17:16 crc kubenswrapper[4858]: I0202 17:17:16.501707 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w"
Feb 02 17:17:17 crc kubenswrapper[4858]: I0202 17:17:17.400212 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm"
Feb 02 17:17:17 crc kubenswrapper[4858]: I0202 17:17:17.402820 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 02 17:17:17 crc kubenswrapper[4858]: I0202 17:17:17.403497 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.193379 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.234900 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7c9rp"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.235581 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.235945 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9wxc4"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.236484 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9wxc4"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.238885 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.238939 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.238961 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.238979 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.241332 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.241594 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.241790 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.242078 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.242204 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.242477 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.242552 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.242631 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.242683 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ssvjj"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.242931 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.243089 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.243130 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.243359 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ssvjj"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.243457 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.247561 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.248099 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.248553 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.248705 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.249427 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.249621 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.252509 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.253108 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.259048 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.259357 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.259558 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.259668 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.259752 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.266216 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.266872 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.267778 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.267865 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.268870 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.271806 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.274981 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.292823 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.293405 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-zq59f"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.294245 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zq59f"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.295231 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ssvjj"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.295420 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.295593 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.295815 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.296657 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d75bl"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.297015 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d75bl"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.298182 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.298377 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.298629 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.298750 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.298822 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.302092 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9n5ph"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.302734 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-npxd5"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.303652 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.303803 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.303933 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.304053 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.304155 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.304237 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.304397 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.304599 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.304777 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.306478 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.306688 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-zww4k"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.307046 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7c9rp"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.307070 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.307181 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zww4k"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.307333 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-npxd5"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.311885 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9wxc4"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.312945 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.313039 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.313457 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.313616 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.313730 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.313852 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.313973 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.314202 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.314981 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6l265"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.315441 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.315656 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.315758 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6l265"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.315844 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.315660 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-45snr"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.316512 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.316605 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-45snr"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.317074 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j5zlt"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.317845 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.317945 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-ffh76"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.318556 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-ffh76"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.321071 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9n9ld"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.321622 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9n9ld"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.321792 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.321963 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rnrz8"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.322286 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.322380 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.322599 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rnrz8"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.326099 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jw86f"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.326962 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jw86f"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.327043 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sscgm"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.328448 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.328780 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.329300 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d75bl"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.329464 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.329884 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.329923 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.330142 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.330353 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.330733 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-frw2d"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.330786 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.330916 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.331065 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.331188 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.331291 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.331340 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.331356 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-frw2d"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.331449 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.331618 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.331913 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.332096 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.332154 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.332199 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.332275 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.332381 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.332414 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.332463 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.332490 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.332541 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.332563 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.332603 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.333699 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.334199 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.334928 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.335132 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.335373 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.335684 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.355333 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rph8"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.356195 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2sv9"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.356864 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2sv9"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.357375 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rph8"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.363750 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-audit-policies\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.363832 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.363864 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f4d39c6c-15e3-48a3-82be-2bc3703dbc7f-images\") pod \"machine-api-operator-5694c8668f-ssvjj\" (UID: \"f4d39c6c-15e3-48a3-82be-2bc3703dbc7f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ssvjj"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.363915 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.363948 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sspcj\" (UniqueName: \"kubernetes.io/projected/8d6a3975-f77c-4e1d-bc4a-9f34708d2421-kube-api-access-sspcj\") pod \"openshift-apiserver-operator-796bbdcf4f-d75bl\" (UID: \"8d6a3975-f77c-4e1d-bc4a-9f34708d2421\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d75bl"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.364105 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-service-ca\") pod \"console-f9d7485db-zww4k\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " pod="openshift-console/console-f9d7485db-zww4k"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.364133 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l499v\" (UniqueName: \"kubernetes.io/projected/5109f31b-0e6e-447b-90cc-78ebbc465626-kube-api-access-l499v\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.364165 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5109f31b-0e6e-447b-90cc-78ebbc465626-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.364200 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5109f31b-0e6e-447b-90cc-78ebbc465626-serving-cert\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.365680 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.365725 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-config\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.365742 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnf92\" (UniqueName: \"kubernetes.io/projected/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-kube-api-access-nnf92\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.365761 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/84734edc-960c-4a16-9281-b10a1dc0a710-console-serving-cert\") pod \"console-f9d7485db-zww4k\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " pod="openshift-console/console-f9d7485db-zww4k"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.365784 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47dsd\" (UniqueName: \"kubernetes.io/projected/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-kube-api-access-47dsd\") pod \"controller-manager-879f6c89f-7c9rp\" (UID: \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.365803 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-node-pullsecrets\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.365821 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5109f31b-0e6e-447b-90cc-78ebbc465626-etcd-client\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.365850 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7c9rp\" (UID: \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.365872 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.365888 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-audit-dir\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.365921 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t2dr\" (UniqueName: \"kubernetes.io/projected/84734edc-960c-4a16-9281-b10a1dc0a710-kube-api-access-2t2dr\") pod \"console-f9d7485db-zww4k\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " pod="openshift-console/console-f9d7485db-zww4k"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.365941 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24d4e737-2ce5-405b-ba6a-74310353dd54-config\") pod \"machine-approver-56656f9798-zq59f\" (UID: \"24d4e737-2ce5-405b-ba6a-74310353dd54\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zq59f"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366050 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e918dfc-5224-43da-9b18-19939e269562-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-npxd5\" (UID: \"7e918dfc-5224-43da-9b18-19939e269562\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-npxd5"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366068 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d6a3975-f77c-4e1d-bc4a-9f34708d2421-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-d75bl\" (UID: \"8d6a3975-f77c-4e1d-bc4a-9f34708d2421\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d75bl"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366113 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-console-config\") pod \"console-f9d7485db-zww4k\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " pod="openshift-console/console-f9d7485db-zww4k"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366133 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-audit-dir\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366152 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366191 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pmlw\" (UniqueName: \"kubernetes.io/projected/f4d39c6c-15e3-48a3-82be-2bc3703dbc7f-kube-api-access-5pmlw\") pod \"machine-api-operator-5694c8668f-ssvjj\" (UID: \"f4d39c6c-15e3-48a3-82be-2bc3703dbc7f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ssvjj"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366211 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5109f31b-0e6e-447b-90cc-78ebbc465626-encryption-config\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366249 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366268 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f4d39c6c-15e3-48a3-82be-2bc3703dbc7f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ssvjj\" (UID: \"f4d39c6c-15e3-48a3-82be-2bc3703dbc7f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ssvjj"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366305 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-audit\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366323 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366343 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-client-ca\") pod \"controller-manager-879f6c89f-7c9rp\" (UID: \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366372 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trn99\" (UniqueName: \"kubernetes.io/projected/7e918dfc-5224-43da-9b18-19939e269562-kube-api-access-trn99\") pod \"cluster-samples-operator-665b6dd947-npxd5\" (UID: \"7e918dfc-5224-43da-9b18-19939e269562\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-npxd5"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366393 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366412 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b330afef-9be2-4944-b014-0b6b2478316d-client-ca\") pod \"route-controller-manager-6576b87f9c-9t6d4\" (UID: \"b330afef-9be2-4944-b014-0b6b2478316d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366456 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-oauth-serving-cert\") pod \"console-f9d7485db-zww4k\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " pod="openshift-console/console-f9d7485db-zww4k"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366544 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-config\") pod \"controller-manager-879f6c89f-7c9rp\" (UID: \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366592 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hdth\" (UniqueName: \"kubernetes.io/projected/b330afef-9be2-4944-b014-0b6b2478316d-kube-api-access-2hdth\") pod \"route-controller-manager-6576b87f9c-9t6d4\" (UID: \"b330afef-9be2-4944-b014-0b6b2478316d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366612 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5109f31b-0e6e-447b-90cc-78ebbc465626-audit-policies\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366634 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-etcd-serving-ca\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366673 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-image-import-ca\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366703 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-encryption-config\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366722 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366738 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366766 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-serving-cert\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366785 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4d39c6c-15e3-48a3-82be-2bc3703dbc7f-config\") pod \"machine-api-operator-5694c8668f-ssvjj\" (UID: \"f4d39c6c-15e3-48a3-82be-2bc3703dbc7f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ssvjj"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366820 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d6a3975-f77c-4e1d-bc4a-9f34708d2421-config\") pod \"openshift-apiserver-operator-796bbdcf4f-d75bl\" (UID: \"8d6a3975-f77c-4e1d-bc4a-9f34708d2421\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d75bl"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366840 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24d4e737-2ce5-405b-ba6a-74310353dd54-auth-proxy-config\") pod \"machine-approver-56656f9798-zq59f\" (UID: \"24d4e737-2ce5-405b-ba6a-74310353dd54\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zq59f"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366856 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366885 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b330afef-9be2-4944-b014-0b6b2478316d-serving-cert\") pod \"route-controller-manager-6576b87f9c-9t6d4\" (UID: \"b330afef-9be2-4944-b014-0b6b2478316d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366925 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf86t\" (UniqueName: \"kubernetes.io/projected/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-kube-api-access-gf86t\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.366963 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/84734edc-960c-4a16-9281-b10a1dc0a710-console-oauth-config\") pod \"console-f9d7485db-zww4k\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " pod="openshift-console/console-f9d7485db-zww4k"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.369883 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bhw7l"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.370192 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.370684 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/24d4e737-2ce5-405b-ba6a-74310353dd54-machine-approver-tls\") pod \"machine-approver-56656f9798-zq59f\" (UID: \"24d4e737-2ce5-405b-ba6a-74310353dd54\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zq59f"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.370743 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5109f31b-0e6e-447b-90cc-78ebbc465626-audit-dir\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.370766 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-trusted-ca-bundle\") pod \"console-f9d7485db-zww4k\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " pod="openshift-console/console-f9d7485db-zww4k"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.377039 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bhw7l"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.377389 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.378704 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2cdt\" (UniqueName: \"kubernetes.io/projected/24d4e737-2ce5-405b-ba6a-74310353dd54-kube-api-access-w2cdt\") pod \"machine-approver-56656f9798-zq59f\" (UID: \"24d4e737-2ce5-405b-ba6a-74310353dd54\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zq59f"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.378746 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b330afef-9be2-4944-b014-0b6b2478316d-config\") pod \"route-controller-manager-6576b87f9c-9t6d4\" (UID: \"b330afef-9be2-4944-b014-0b6b2478316d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.378771 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-etcd-client\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.378789 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5109f31b-0e6e-447b-90cc-78ebbc465626-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.378809 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.378824 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-serving-cert\") pod \"controller-manager-879f6c89f-7c9rp\" (UID: \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.381392 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-xvhj7"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.383682 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.384317 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.385859 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.386048 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.387074 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wd5ld"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.387477 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-npxd5"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.387494 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dtf2d"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.388019 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xvhj7"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.388293 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wd5ld"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.388866 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.388975 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgpp5"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.389336 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgpp5"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.389389 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dtf2d"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.390911 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mj6r9"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.392102 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mj6r9"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.392747 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8xcw"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.393282 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8xcw"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.394873 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9dfzw"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.399138 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.403568 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9dfzw"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.405816 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8nr74"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.409377 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v422r"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.409552 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-8nr74"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.409759 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v422r"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.413210 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lfg9z"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.413809 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfg9z"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.418886 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.421139 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nd9bb"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.421733 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.422832 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.423201 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.426325 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nzlsw"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.426923 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-nzlsw"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.428022 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-qvvdf"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.428525 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-qvvdf"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.431468 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-l9845"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.432126 4858 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-l9845" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.436376 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jxr6v"] Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.436878 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jxr6v" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.471028 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.471903 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29"] Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.472766 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.473309 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9n5ph"] Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.473402 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.477709 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.477956 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zww4k"] Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.479512 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.479552 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-audit-dir\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.479608 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-console-config\") pod \"console-f9d7485db-zww4k\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " pod="openshift-console/console-f9d7485db-zww4k" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.479640 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t2dr\" (UniqueName: \"kubernetes.io/projected/84734edc-960c-4a16-9281-b10a1dc0a710-kube-api-access-2t2dr\") pod \"console-f9d7485db-zww4k\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " pod="openshift-console/console-f9d7485db-zww4k" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.479675 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/24d4e737-2ce5-405b-ba6a-74310353dd54-config\") pod \"machine-approver-56656f9798-zq59f\" (UID: \"24d4e737-2ce5-405b-ba6a-74310353dd54\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zq59f" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.479702 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e918dfc-5224-43da-9b18-19939e269562-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-npxd5\" (UID: \"7e918dfc-5224-43da-9b18-19939e269562\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-npxd5" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.479732 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d6a3975-f77c-4e1d-bc4a-9f34708d2421-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-d75bl\" (UID: \"8d6a3975-f77c-4e1d-bc4a-9f34708d2421\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d75bl" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.479764 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-audit-dir\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.479794 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.479824 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pmlw\" (UniqueName: \"kubernetes.io/projected/f4d39c6c-15e3-48a3-82be-2bc3703dbc7f-kube-api-access-5pmlw\") pod \"machine-api-operator-5694c8668f-ssvjj\" (UID: \"f4d39c6c-15e3-48a3-82be-2bc3703dbc7f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ssvjj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.479856 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5109f31b-0e6e-447b-90cc-78ebbc465626-encryption-config\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.479892 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.479924 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/f4d39c6c-15e3-48a3-82be-2bc3703dbc7f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ssvjj\" (UID: \"f4d39c6c-15e3-48a3-82be-2bc3703dbc7f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ssvjj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.479955 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-audit\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480006 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480040 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480072 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-client-ca\") pod \"controller-manager-879f6c89f-7c9rp\" (UID: \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480107 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trn99\" (UniqueName: \"kubernetes.io/projected/7e918dfc-5224-43da-9b18-19939e269562-kube-api-access-trn99\") pod \"cluster-samples-operator-665b6dd947-npxd5\" (UID: \"7e918dfc-5224-43da-9b18-19939e269562\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-npxd5" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480136 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b330afef-9be2-4944-b014-0b6b2478316d-client-ca\") pod \"route-controller-manager-6576b87f9c-9t6d4\" (UID: \"b330afef-9be2-4944-b014-0b6b2478316d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480135 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-audit-dir\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480168 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-oauth-serving-cert\") pod \"console-f9d7485db-zww4k\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " 
pod="openshift-console/console-f9d7485db-zww4k" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480202 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-config\") pod \"controller-manager-879f6c89f-7c9rp\" (UID: \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480234 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hdth\" (UniqueName: \"kubernetes.io/projected/b330afef-9be2-4944-b014-0b6b2478316d-kube-api-access-2hdth\") pod \"route-controller-manager-6576b87f9c-9t6d4\" (UID: \"b330afef-9be2-4944-b014-0b6b2478316d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480267 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5109f31b-0e6e-447b-90cc-78ebbc465626-audit-policies\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480294 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-etcd-serving-ca\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480360 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-image-import-ca\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480388 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-encryption-config\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480415 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-serving-cert\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480438 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480467 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480499 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480531 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4d39c6c-15e3-48a3-82be-2bc3703dbc7f-config\") pod \"machine-api-operator-5694c8668f-ssvjj\" (UID: \"f4d39c6c-15e3-48a3-82be-2bc3703dbc7f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ssvjj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480559 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d6a3975-f77c-4e1d-bc4a-9f34708d2421-config\") pod \"openshift-apiserver-operator-796bbdcf4f-d75bl\" (UID: \"8d6a3975-f77c-4e1d-bc4a-9f34708d2421\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d75bl" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480585 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24d4e737-2ce5-405b-ba6a-74310353dd54-auth-proxy-config\") pod \"machine-approver-56656f9798-zq59f\" (UID: \"24d4e737-2ce5-405b-ba6a-74310353dd54\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zq59f" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480616 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b330afef-9be2-4944-b014-0b6b2478316d-serving-cert\") pod \"route-controller-manager-6576b87f9c-9t6d4\" (UID: \"b330afef-9be2-4944-b014-0b6b2478316d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480661 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf86t\" (UniqueName: \"kubernetes.io/projected/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-kube-api-access-gf86t\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480693 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/84734edc-960c-4a16-9281-b10a1dc0a710-console-oauth-config\") pod \"console-f9d7485db-zww4k\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " pod="openshift-console/console-f9d7485db-zww4k" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480723 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/24d4e737-2ce5-405b-ba6a-74310353dd54-machine-approver-tls\") pod \"machine-approver-56656f9798-zq59f\" (UID: 
\"24d4e737-2ce5-405b-ba6a-74310353dd54\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zq59f" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480753 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5109f31b-0e6e-447b-90cc-78ebbc465626-audit-dir\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480783 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-trusted-ca-bundle\") pod \"console-f9d7485db-zww4k\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " pod="openshift-console/console-f9d7485db-zww4k" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480805 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480811 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2cdt\" (UniqueName: \"kubernetes.io/projected/24d4e737-2ce5-405b-ba6a-74310353dd54-kube-api-access-w2cdt\") pod \"machine-approver-56656f9798-zq59f\" (UID: \"24d4e737-2ce5-405b-ba6a-74310353dd54\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zq59f" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480841 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b330afef-9be2-4944-b014-0b6b2478316d-config\") pod \"route-controller-manager-6576b87f9c-9t6d4\" (UID: \"b330afef-9be2-4944-b014-0b6b2478316d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480854 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-audit-dir\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480871 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-etcd-client\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480900 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5109f31b-0e6e-447b-90cc-78ebbc465626-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480927 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" 
(UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.480953 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-serving-cert\") pod \"controller-manager-879f6c89f-7c9rp\" (UID: \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.481061 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-audit-policies\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.481087 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.481126 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f4d39c6c-15e3-48a3-82be-2bc3703dbc7f-images\") pod \"machine-api-operator-5694c8668f-ssvjj\" (UID: \"f4d39c6c-15e3-48a3-82be-2bc3703dbc7f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ssvjj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.481154 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.481189 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sspcj\" (UniqueName: \"kubernetes.io/projected/8d6a3975-f77c-4e1d-bc4a-9f34708d2421-kube-api-access-sspcj\") pod \"openshift-apiserver-operator-796bbdcf4f-d75bl\" (UID: \"8d6a3975-f77c-4e1d-bc4a-9f34708d2421\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d75bl" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.481220 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-service-ca\") pod \"console-f9d7485db-zww4k\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " pod="openshift-console/console-f9d7485db-zww4k" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.481268 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l499v\" (UniqueName: \"kubernetes.io/projected/5109f31b-0e6e-447b-90cc-78ebbc465626-kube-api-access-l499v\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: 
\"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.481311 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5109f31b-0e6e-447b-90cc-78ebbc465626-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.481345 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5109f31b-0e6e-447b-90cc-78ebbc465626-serving-cert\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.481375 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.481400 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-config\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.481428 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnf92\" (UniqueName: \"kubernetes.io/projected/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-kube-api-access-nnf92\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.481457 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/84734edc-960c-4a16-9281-b10a1dc0a710-console-serving-cert\") pod \"console-f9d7485db-zww4k\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " pod="openshift-console/console-f9d7485db-zww4k" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.481468 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-console-config\") pod \"console-f9d7485db-zww4k\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " pod="openshift-console/console-f9d7485db-zww4k" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.481487 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47dsd\" (UniqueName: \"kubernetes.io/projected/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-kube-api-access-47dsd\") pod \"controller-manager-879f6c89f-7c9rp\" (UID: \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.481543 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-node-pullsecrets\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.481571 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5109f31b-0e6e-447b-90cc-78ebbc465626-etcd-client\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.481613 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7c9rp\" (UID: \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.481915 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24d4e737-2ce5-405b-ba6a-74310353dd54-config\") pod \"machine-approver-56656f9798-zq59f\" (UID: \"24d4e737-2ce5-405b-ba6a-74310353dd54\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zq59f" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.483134 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.483512 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-trusted-ca-bundle\") pod \"console-f9d7485db-zww4k\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " pod="openshift-console/console-f9d7485db-zww4k" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.483601 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-audit-policies\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.483822 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.484244 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-service-ca\") pod \"console-f9d7485db-zww4k\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " pod="openshift-console/console-f9d7485db-zww4k" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.484297 4858 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sscgm"] Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.484554 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-node-pullsecrets\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.484879 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f4d39c6c-15e3-48a3-82be-2bc3703dbc7f-images\") pod \"machine-api-operator-5694c8668f-ssvjj\" (UID: \"f4d39c6c-15e3-48a3-82be-2bc3703dbc7f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ssvjj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.484905 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b330afef-9be2-4944-b014-0b6b2478316d-config\") pod \"route-controller-manager-6576b87f9c-9t6d4\" (UID: \"b330afef-9be2-4944-b014-0b6b2478316d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.485500 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7c9rp\" (UID: \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.485536 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.485597 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-config\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.490267 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5109f31b-0e6e-447b-90cc-78ebbc465626-serving-cert\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.490316 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5109f31b-0e6e-447b-90cc-78ebbc465626-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.490520 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5109f31b-0e6e-447b-90cc-78ebbc465626-trusted-ca-bundle\") pod 
\"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.490869 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-audit\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.491358 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24d4e737-2ce5-405b-ba6a-74310353dd54-auth-proxy-config\") pod \"machine-approver-56656f9798-zq59f\" (UID: \"24d4e737-2ce5-405b-ba6a-74310353dd54\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zq59f" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.491622 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4d39c6c-15e3-48a3-82be-2bc3703dbc7f-config\") pod \"machine-api-operator-5694c8668f-ssvjj\" (UID: \"f4d39c6c-15e3-48a3-82be-2bc3703dbc7f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ssvjj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.492441 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7e918dfc-5224-43da-9b18-19939e269562-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-npxd5\" (UID: \"7e918dfc-5224-43da-9b18-19939e269562\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-npxd5" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.492504 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.492785 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.493012 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.493090 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d6a3975-f77c-4e1d-bc4a-9f34708d2421-config\") pod \"openshift-apiserver-operator-796bbdcf4f-d75bl\" (UID: \"8d6a3975-f77c-4e1d-bc4a-9f34708d2421\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d75bl" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.493853 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rph8"] Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.494319 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-serving-cert\") pod \"controller-manager-879f6c89f-7c9rp\" (UID: \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.494537 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-client-ca\") pod \"controller-manager-879f6c89f-7c9rp\" (UID: \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.494825 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-config\") pod \"controller-manager-879f6c89f-7c9rp\" (UID: \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.494885 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f4d39c6c-15e3-48a3-82be-2bc3703dbc7f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ssvjj\" (UID: \"f4d39c6c-15e3-48a3-82be-2bc3703dbc7f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ssvjj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.495239 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-etcd-serving-ca\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.495472 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b330afef-9be2-4944-b014-0b6b2478316d-client-ca\") pod \"route-controller-manager-6576b87f9c-9t6d4\" (UID: \"b330afef-9be2-4944-b014-0b6b2478316d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.495707 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5109f31b-0e6e-447b-90cc-78ebbc465626-audit-dir\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.495764 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-image-import-ca\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.496093 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-oauth-serving-cert\") pod \"console-f9d7485db-zww4k\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " pod="openshift-console/console-f9d7485db-zww4k" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 
17:17:19.496150 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ffh76"] Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.496239 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.498016 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.498213 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.498282 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.498409 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5109f31b-0e6e-447b-90cc-78ebbc465626-audit-policies\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.498562 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/24d4e737-2ce5-405b-ba6a-74310353dd54-machine-approver-tls\") pod \"machine-approver-56656f9798-zq59f\" (UID: \"24d4e737-2ce5-405b-ba6a-74310353dd54\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zq59f" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.498701 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wd5ld"] Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.499471 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-serving-cert\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.500295 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.500346 4858 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2sv9"] Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.500633 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rnrz8"] Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.501109 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-encryption-config\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.501144 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.503157 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9dfzw"] Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.503453 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lfg9z"] Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.503936 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jw86f"] Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.505196 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-etcd-client\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.505287 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5109f31b-0e6e-447b-90cc-78ebbc465626-etcd-client\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.505331 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nd9bb"] Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.505762 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.505869 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d6a3975-f77c-4e1d-bc4a-9f34708d2421-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-d75bl\" (UID: \"8d6a3975-f77c-4e1d-bc4a-9f34708d2421\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d75bl" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 
17:17:19.506252 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6l265"] Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.506433 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b330afef-9be2-4944-b014-0b6b2478316d-serving-cert\") pod \"route-controller-manager-6576b87f9c-9t6d4\" (UID: \"b330afef-9be2-4944-b014-0b6b2478316d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.506990 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/84734edc-960c-4a16-9281-b10a1dc0a710-console-serving-cert\") pod \"console-f9d7485db-zww4k\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " pod="openshift-console/console-f9d7485db-zww4k" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.507216 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-45snr"] Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.508243 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bhw7l"] Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.508697 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5109f31b-0e6e-447b-90cc-78ebbc465626-encryption-config\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.509327 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mj6r9"] Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.510287 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-q5rqg"] Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.511505 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-wf65d"] Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.511666 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-q5rqg" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.512075 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-wf65d"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.512252 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8xcw"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.513301 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9n9ld"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.513915 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/84734edc-960c-4a16-9281-b10a1dc0a710-console-oauth-config\") pod \"console-f9d7485db-zww4k\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " pod="openshift-console/console-f9d7485db-zww4k"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.514324 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nzlsw"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.515280 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgpp5"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.516254 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-xvhj7"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.517282 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8nr74"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.517407 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.518254 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jxr6v"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.519458 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.520232 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dtf2d"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.521238 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v422r"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.522231 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-q5rqg"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.523290 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j5zlt"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.524289 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-qvvdf"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.525273 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.526233 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-wf65d"]
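[Annotation] The "No sandbox for pod can be found. Need to start a new one" entries above are the kubelet deciding that a pod needs a fresh sandbox and asking the container runtime (CRI-O here, judging by the crio-prefixed cgroup paths later in the log) to create one over the CRI gRPC API. A minimal sketch of that call follows; the socket path is CRI-O's default and the pod metadata is a placeholder, not what the kubelet actually sends.

```go
// Sketch: create a pod sandbox over CRI, as the kubelet does after
// "No sandbox for pod can be found". Socket path and metadata are
// illustrative assumptions.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default endpoint; containerd would use /run/containerd/containerd.sock.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI endpoint: %v", err)
	}
	defer conn.Close()

	rt := runtimev1.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// Hypothetical pod coordinates; the kubelet fills these from the Pod object.
	resp, err := rt.RunPodSandbox(ctx, &runtimev1.RunPodSandboxRequest{
		Config: &runtimev1.PodSandboxConfig{
			Metadata: &runtimev1.PodSandboxMetadata{
				Name:      "dns-default-q5rqg",
				Namespace: "openshift-dns",
				Uid:       "example-uid",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatalf("RunPodSandbox: %v", err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId)
}
```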
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.527173 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xr77f"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.528517 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xr77f"]
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.528727 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xr77f"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.537372 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.557769 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.578634 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.597477 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.618006 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.637085 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.657893 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.678149 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.697842 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.718828 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.738555 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.758438 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.778767 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.797765 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.818249 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.868841 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.878464 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.900183 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.919274 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.938837 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.958598 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.978677 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 02 17:17:19 crc kubenswrapper[4858]: I0202 17:17:19.999834 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.017693 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.038504 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.058030 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.078757 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.099209 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.118179 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.138619 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.157859 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.177830 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.197807 4858 reflector.go:368] 
Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.197807 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.217709 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.238429 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.258408 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.278586 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.298277 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.318400 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.338185 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.378857 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.400656 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.416016 4858 request.go:700] Waited for 1.012129304s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.418134 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.439019 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.459233 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.478739 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.518307 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.538389 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
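[Annotation] The request.go:700 entry above ("Waited for 1.012129304s due to client-side throttling, not priority and fairness") is client-go's own token-bucket rate limiter delaying the kubelet's burst of GETs during startup; it is distinct from server-side API Priority and Fairness, as the message itself notes. The knobs live on rest.Config. A minimal sketch; the QPS/Burst values and the in-cluster config are illustrative, not the kubelet's actual settings:

```go
// Sketch: the client-side rate limit behind "Waited for ... due to
// client-side throttling". The List call mirrors the throttled GET
// in the log (same namespace and field selector).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes running inside a pod
	if err != nil {
		panic(err)
	}
	// Token bucket: sustained 50 requests/s, bursts up to 100.
	// Requests beyond the bucket wait, producing the log line above.
	cfg.QPS = 50
	cfg.Burst = 100

	cs := kubernetes.NewForConfigOrDie(cfg)
	cms, err := cs.CoreV1().ConfigMaps("openshift-kube-storage-version-migrator").
		List(context.TODO(), metav1.ListOptions{FieldSelector: "metadata.name=kube-root-ca.crt"})
	if err != nil {
		panic(err)
	}
	fmt.Println("configmaps:", len(cms.Items))
}
```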
from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.578147 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.597848 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.624798 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.638284 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.658148 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.678800 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.701580 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.719435 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.739241 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.758863 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.778683 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.799185 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.819646 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.838404 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.859189 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.879142 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.904738 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.918462 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.938898 4858 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.959123 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.978494 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 17:17:20 crc kubenswrapper[4858]: I0202 17:17:20.998556 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.018630 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.038962 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.059071 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.078576 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.098248 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.156332 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2cdt\" (UniqueName: \"kubernetes.io/projected/24d4e737-2ce5-405b-ba6a-74310353dd54-kube-api-access-w2cdt\") pod \"machine-approver-56656f9798-zq59f\" (UID: \"24d4e737-2ce5-405b-ba6a-74310353dd54\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zq59f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.167593 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sspcj\" (UniqueName: \"kubernetes.io/projected/8d6a3975-f77c-4e1d-bc4a-9f34708d2421-kube-api-access-sspcj\") pod \"openshift-apiserver-operator-796bbdcf4f-d75bl\" (UID: \"8d6a3975-f77c-4e1d-bc4a-9f34708d2421\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d75bl" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.170143 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zq59f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.189917 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t2dr\" (UniqueName: \"kubernetes.io/projected/84734edc-960c-4a16-9281-b10a1dc0a710-kube-api-access-2t2dr\") pod \"console-f9d7485db-zww4k\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " pod="openshift-console/console-f9d7485db-zww4k" Feb 02 17:17:21 crc kubenswrapper[4858]: W0202 17:17:21.197132 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24d4e737_2ce5_405b_ba6a_74310353dd54.slice/crio-19b3abec6d5f2e58ea58914a12b60d6f174fa28a15add658829e0f32f855df11 WatchSource:0}: Error finding container 19b3abec6d5f2e58ea58914a12b60d6f174fa28a15add658829e0f32f855df11: Status 404 returned error can't find the container with id 19b3abec6d5f2e58ea58914a12b60d6f174fa28a15add658829e0f32f855df11 Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.203656 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d75bl" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.208609 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pmlw\" (UniqueName: \"kubernetes.io/projected/f4d39c6c-15e3-48a3-82be-2bc3703dbc7f-kube-api-access-5pmlw\") pod \"machine-api-operator-5694c8668f-ssvjj\" (UID: \"f4d39c6c-15e3-48a3-82be-2bc3703dbc7f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ssvjj" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.216470 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-zww4k" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.219874 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnf92\" (UniqueName: \"kubernetes.io/projected/42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0-kube-api-access-nnf92\") pod \"apiserver-76f77b778f-9wxc4\" (UID: \"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0\") " pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.251384 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l499v\" (UniqueName: \"kubernetes.io/projected/5109f31b-0e6e-447b-90cc-78ebbc465626-kube-api-access-l499v\") pod \"apiserver-7bbb656c7d-lvvvj\" (UID: \"5109f31b-0e6e-447b-90cc-78ebbc465626\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.260442 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47dsd\" (UniqueName: \"kubernetes.io/projected/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-kube-api-access-47dsd\") pod \"controller-manager-879f6c89f-7c9rp\" (UID: \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.275398 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trn99\" (UniqueName: \"kubernetes.io/projected/7e918dfc-5224-43da-9b18-19939e269562-kube-api-access-trn99\") pod \"cluster-samples-operator-665b6dd947-npxd5\" (UID: \"7e918dfc-5224-43da-9b18-19939e269562\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-npxd5" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.296005 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf86t\" (UniqueName: \"kubernetes.io/projected/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-kube-api-access-gf86t\") pod \"oauth-openshift-558db77b4-9n5ph\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.315259 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hdth\" (UniqueName: \"kubernetes.io/projected/b330afef-9be2-4944-b014-0b6b2478316d-kube-api-access-2hdth\") pod \"route-controller-manager-6576b87f9c-9t6d4\" (UID: \"b330afef-9be2-4944-b014-0b6b2478316d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.318536 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.338121 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.359368 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.361008 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.378334 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.394352 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.397882 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.410746 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ssvjj" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.416453 4858 request.go:700] Waited for 1.904167728s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.417929 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.421836 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.440441 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d75bl"] Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.440623 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.457181 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.459948 4858 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.464725 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zww4k"] Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.477500 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 02 17:17:21 crc kubenswrapper[4858]: W0202 17:17:21.478346 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84734edc_960c_4a16_9281_b10a1dc0a710.slice/crio-a4ad7a5ed785515bc25b2b1e07c695b3275a1885dbd834ffbb50bfce914d62dd WatchSource:0}: Error finding container a4ad7a5ed785515bc25b2b1e07c695b3275a1885dbd834ffbb50bfce914d62dd: Status 404 returned error can't find the container with id a4ad7a5ed785515bc25b2b1e07c695b3275a1885dbd834ffbb50bfce914d62dd Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.497811 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.504256 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.523362 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-npxd5" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.610642 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c022725c-9725-4d5c-a703-5d61c931d9e8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.610701 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6jg6\" (UniqueName: \"kubernetes.io/projected/c022725c-9725-4d5c-a703-5d61c931d9e8-kube-api-access-c6jg6\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.610727 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/60962908-fe41-4333-80b3-ed8bbc9c4fcb-available-featuregates\") pod \"openshift-config-operator-7777fb866f-45snr\" (UID: \"60962908-fe41-4333-80b3-ed8bbc9c4fcb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-45snr" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.610750 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b2f0db-60c5-4143-a42d-e62ee5599503-config\") pod \"kube-controller-manager-operator-78b949d7b-4rph8\" (UID: \"75b2f0db-60c5-4143-a42d-e62ee5599503\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rph8" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.610804 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb9mj\" (UniqueName: \"kubernetes.io/projected/ac750cb2-c0b0-4044-82c1-41c0a46748e6-kube-api-access-qb9mj\") pod \"multus-admission-controller-857f4d67dd-8nr74\" (UID: \"ac750cb2-c0b0-4044-82c1-41c0a46748e6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8nr74" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.610829 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h85qc\" (UniqueName: \"kubernetes.io/projected/91c97885-896e-4947-907f-acb0a86ce947-kube-api-access-h85qc\") pod \"package-server-manager-789f6589d5-mj6r9\" (UID: \"91c97885-896e-4947-907f-acb0a86ce947\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mj6r9" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.610852 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c022725c-9725-4d5c-a703-5d61c931d9e8-registry-certificates\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.610889 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfjsz\" (UniqueName: \"kubernetes.io/projected/5b905a5a-09b5-4cce-a38d-34a92e704c0b-kube-api-access-mfjsz\") pod \"openshift-controller-manager-operator-756b6f6bc6-6l265\" (UID: \"5b905a5a-09b5-4cce-a38d-34a92e704c0b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6l265" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.610914 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65gng\" (UniqueName: \"kubernetes.io/projected/e330b41c-dacd-4c4b-a013-dd16a913ac54-kube-api-access-65gng\") pod \"control-plane-machine-set-operator-78cbb6b69f-hgpp5\" (UID: \"e330b41c-dacd-4c4b-a013-dd16a913ac54\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgpp5" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.610948 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.611002 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e330b41c-dacd-4c4b-a013-dd16a913ac54-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hgpp5\" (UID: \"e330b41c-dacd-4c4b-a013-dd16a913ac54\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgpp5" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.611083 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfxpk\" (UniqueName: \"kubernetes.io/projected/dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24-kube-api-access-vfxpk\") pod \"cluster-image-registry-operator-dc59b4c8b-9n9ld\" (UID: \"dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9n9ld" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.611109 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4af8047b-d906-4458-84e9-4cbefe269b59-metrics-certs\") pod \"router-default-5444994796-frw2d\" (UID: \"4af8047b-d906-4458-84e9-4cbefe269b59\") " pod="openshift-ingress/router-default-5444994796-frw2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.611190 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfnl7\" (UniqueName: \"kubernetes.io/projected/60962908-fe41-4333-80b3-ed8bbc9c4fcb-kube-api-access-sfnl7\") pod \"openshift-config-operator-7777fb866f-45snr\" (UID: \"60962908-fe41-4333-80b3-ed8bbc9c4fcb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-45snr" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.611262 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e909476f-4cf5-4240-a8bc-51ed96ff8fee-etcd-client\") pod \"etcd-operator-b45778765-sscgm\" (UID: \"e909476f-4cf5-4240-a8bc-51ed96ff8fee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.611333 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60962908-fe41-4333-80b3-ed8bbc9c4fcb-serving-cert\") pod \"openshift-config-operator-7777fb866f-45snr\" (UID: \"60962908-fe41-4333-80b3-ed8bbc9c4fcb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-45snr" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.611351 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6b927555-7584-46c4-ba20-4899aa734e9e-srv-cert\") pod \"olm-operator-6b444d44fb-v422r\" (UID: \"6b927555-7584-46c4-ba20-4899aa734e9e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v422r" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.611417 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ac750cb2-c0b0-4044-82c1-41c0a46748e6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8nr74\" (UID: \"ac750cb2-c0b0-4044-82c1-41c0a46748e6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8nr74" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.611437 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4af8047b-d906-4458-84e9-4cbefe269b59-default-certificate\") pod \"router-default-5444994796-frw2d\" (UID: \"4af8047b-d906-4458-84e9-4cbefe269b59\") " pod="openshift-ingress/router-default-5444994796-frw2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.611491 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xchr6\" (UniqueName: \"kubernetes.io/projected/d8edb432-a713-4b13-a42d-510636c08f81-kube-api-access-xchr6\") pod \"console-operator-58897d9998-rnrz8\" (UID: \"d8edb432-a713-4b13-a42d-510636c08f81\") " pod="openshift-console-operator/console-operator-58897d9998-rnrz8" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.611549 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b68e9688-43c8-4f2e-aafa-8b634249fa5e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-xvhj7\" (UID: \"b68e9688-43c8-4f2e-aafa-8b634249fa5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xvhj7" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.611702 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/21a5ff9b-0143-4513-8139-84b24e0854af-profile-collector-cert\") pod \"catalog-operator-68c6474976-g8xcw\" (UID: \"21a5ff9b-0143-4513-8139-84b24e0854af\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8xcw" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.611788 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/91c97885-896e-4947-907f-acb0a86ce947-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mj6r9\" (UID: \"91c97885-896e-4947-907f-acb0a86ce947\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mj6r9" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.611815 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fd108fb-dfcf-4826-a2e0-8e4877e90f0a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dtf2d\" (UID: \"3fd108fb-dfcf-4826-a2e0-8e4877e90f0a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dtf2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.611834 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e909476f-4cf5-4240-a8bc-51ed96ff8fee-etcd-service-ca\") pod \"etcd-operator-b45778765-sscgm\" (UID: \"e909476f-4cf5-4240-a8bc-51ed96ff8fee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.611862 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttxcc\" (UniqueName: \"kubernetes.io/projected/d04acd9b-9b86-4119-b327-a3ee6f2690da-kube-api-access-ttxcc\") pod \"dns-operator-744455d44c-jw86f\" (UID: \"d04acd9b-9b86-4119-b327-a3ee6f2690da\") " pod="openshift-dns-operator/dns-operator-744455d44c-jw86f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.611920 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjw7v\" (UniqueName: \"kubernetes.io/projected/b68e9688-43c8-4f2e-aafa-8b634249fa5e-kube-api-access-vjw7v\") pod \"machine-config-operator-74547568cd-xvhj7\" (UID: \"b68e9688-43c8-4f2e-aafa-8b634249fa5e\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xvhj7" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.611942 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvjhd\" (UniqueName: \"kubernetes.io/projected/77955d8c-4a9c-4484-ad86-dbe070fb4451-kube-api-access-jvjhd\") pod \"migrator-59844c95c7-9dfzw\" (UID: \"77955d8c-4a9c-4484-ad86-dbe070fb4451\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9dfzw" Feb 02 17:17:21 crc kubenswrapper[4858]: E0202 17:17:21.611968 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:22.111954278 +0000 UTC m=+143.264369633 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.612081 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8edb432-a713-4b13-a42d-510636c08f81-serving-cert\") pod \"console-operator-58897d9998-rnrz8\" (UID: \"d8edb432-a713-4b13-a42d-510636c08f81\") " pod="openshift-console-operator/console-operator-58897d9998-rnrz8" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.612126 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/21a5ff9b-0143-4513-8139-84b24e0854af-srv-cert\") pod \"catalog-operator-68c6474976-g8xcw\" (UID: \"21a5ff9b-0143-4513-8139-84b24e0854af\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8xcw" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.612259 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9d8c712-038d-47f2-931a-e6cf3af58665-trusted-ca\") pod \"ingress-operator-5b745b69d9-bhw7l\" (UID: \"c9d8c712-038d-47f2-931a-e6cf3af58665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bhw7l" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.612288 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fd108fb-dfcf-4826-a2e0-8e4877e90f0a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dtf2d\" (UID: \"3fd108fb-dfcf-4826-a2e0-8e4877e90f0a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dtf2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.612352 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75b2f0db-60c5-4143-a42d-e62ee5599503-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4rph8\" (UID: \"75b2f0db-60c5-4143-a42d-e62ee5599503\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rph8" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.612399 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c022725c-9725-4d5c-a703-5d61c931d9e8-trusted-ca\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.612446 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e909476f-4cf5-4240-a8bc-51ed96ff8fee-serving-cert\") pod \"etcd-operator-b45778765-sscgm\" (UID: \"e909476f-4cf5-4240-a8bc-51ed96ff8fee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.612468 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b68e9688-43c8-4f2e-aafa-8b634249fa5e-proxy-tls\") pod \"machine-config-operator-74547568cd-xvhj7\" (UID: \"b68e9688-43c8-4f2e-aafa-8b634249fa5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xvhj7" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.612536 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b68e9688-43c8-4f2e-aafa-8b634249fa5e-images\") pod \"machine-config-operator-74547568cd-xvhj7\" (UID: \"b68e9688-43c8-4f2e-aafa-8b634249fa5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xvhj7" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.612609 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b905a5a-09b5-4cce-a38d-34a92e704c0b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6l265\" (UID: \"5b905a5a-09b5-4cce-a38d-34a92e704c0b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6l265" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.612667 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65g4l\" (UniqueName: \"kubernetes.io/projected/21a5ff9b-0143-4513-8139-84b24e0854af-kube-api-access-65g4l\") pod \"catalog-operator-68c6474976-g8xcw\" (UID: \"21a5ff9b-0143-4513-8139-84b24e0854af\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8xcw" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.612694 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9d8c712-038d-47f2-931a-e6cf3af58665-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bhw7l\" (UID: \"c9d8c712-038d-47f2-931a-e6cf3af58665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bhw7l" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.612746 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pdjs\" (UniqueName: \"kubernetes.io/projected/3fd108fb-dfcf-4826-a2e0-8e4877e90f0a-kube-api-access-6pdjs\") pod \"kube-storage-version-migrator-operator-b67b599dd-dtf2d\" (UID: 
\"3fd108fb-dfcf-4826-a2e0-8e4877e90f0a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dtf2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.612768 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c9d8c712-038d-47f2-931a-e6cf3af58665-metrics-tls\") pod \"ingress-operator-5b745b69d9-bhw7l\" (UID: \"c9d8c712-038d-47f2-931a-e6cf3af58665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bhw7l" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.612789 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8edb432-a713-4b13-a42d-510636c08f81-config\") pod \"console-operator-58897d9998-rnrz8\" (UID: \"d8edb432-a713-4b13-a42d-510636c08f81\") " pod="openshift-console-operator/console-operator-58897d9998-rnrz8" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.612806 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00d93965-f44b-4003-8093-13f33936021e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wd5ld\" (UID: \"00d93965-f44b-4003-8093-13f33936021e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wd5ld" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.612874 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d04acd9b-9b86-4119-b327-a3ee6f2690da-metrics-tls\") pod \"dns-operator-744455d44c-jw86f\" (UID: \"d04acd9b-9b86-4119-b327-a3ee6f2690da\") " pod="openshift-dns-operator/dns-operator-744455d44c-jw86f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.613394 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d57s\" (UniqueName: \"kubernetes.io/projected/4af8047b-d906-4458-84e9-4cbefe269b59-kube-api-access-7d57s\") pod \"router-default-5444994796-frw2d\" (UID: \"4af8047b-d906-4458-84e9-4cbefe269b59\") " pod="openshift-ingress/router-default-5444994796-frw2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.613434 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e909476f-4cf5-4240-a8bc-51ed96ff8fee-etcd-ca\") pod \"etcd-operator-b45778765-sscgm\" (UID: \"e909476f-4cf5-4240-a8bc-51ed96ff8fee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.613453 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4af8047b-d906-4458-84e9-4cbefe269b59-service-ca-bundle\") pod \"router-default-5444994796-frw2d\" (UID: \"4af8047b-d906-4458-84e9-4cbefe269b59\") " pod="openshift-ingress/router-default-5444994796-frw2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.613475 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8edb432-a713-4b13-a42d-510636c08f81-trusted-ca\") pod \"console-operator-58897d9998-rnrz8\" (UID: \"d8edb432-a713-4b13-a42d-510636c08f81\") " 
pod="openshift-console-operator/console-operator-58897d9998-rnrz8" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.613492 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpwxz\" (UniqueName: \"kubernetes.io/projected/fd740a32-9003-4d27-8c7b-3423717fd9bf-kube-api-access-cpwxz\") pod \"downloads-7954f5f757-ffh76\" (UID: \"fd740a32-9003-4d27-8c7b-3423717fd9bf\") " pod="openshift-console/downloads-7954f5f757-ffh76" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.613532 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9864123-f9a0-4524-87ad-131c294b1ffe-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-c2sv9\" (UID: \"e9864123-f9a0-4524-87ad-131c294b1ffe\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2sv9" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.614019 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfq2n\" (UniqueName: \"kubernetes.io/projected/c9d8c712-038d-47f2-931a-e6cf3af58665-kube-api-access-hfq2n\") pod \"ingress-operator-5b745b69d9-bhw7l\" (UID: \"c9d8c712-038d-47f2-931a-e6cf3af58665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bhw7l" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.614087 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e909476f-4cf5-4240-a8bc-51ed96ff8fee-config\") pod \"etcd-operator-b45778765-sscgm\" (UID: \"e909476f-4cf5-4240-a8bc-51ed96ff8fee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.614153 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c022725c-9725-4d5c-a703-5d61c931d9e8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.614374 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9n9ld\" (UID: \"dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9n9ld" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.614409 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75b2f0db-60c5-4143-a42d-e62ee5599503-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4rph8\" (UID: \"75b2f0db-60c5-4143-a42d-e62ee5599503\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rph8" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.614425 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9864123-f9a0-4524-87ad-131c294b1ffe-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-c2sv9\" (UID: \"e9864123-f9a0-4524-87ad-131c294b1ffe\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2sv9" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.614451 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9n9ld\" (UID: \"dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9n9ld" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.614484 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9864123-f9a0-4524-87ad-131c294b1ffe-config\") pod \"kube-apiserver-operator-766d6c64bb-c2sv9\" (UID: \"e9864123-f9a0-4524-87ad-131c294b1ffe\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2sv9" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.614530 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c022725c-9725-4d5c-a703-5d61c931d9e8-bound-sa-token\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.614651 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00d93965-f44b-4003-8093-13f33936021e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wd5ld\" (UID: \"00d93965-f44b-4003-8093-13f33936021e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wd5ld" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.615286 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6b927555-7584-46c4-ba20-4899aa734e9e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-v422r\" (UID: \"6b927555-7584-46c4-ba20-4899aa734e9e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v422r" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.615346 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c022725c-9725-4d5c-a703-5d61c931d9e8-registry-tls\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.615366 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4af8047b-d906-4458-84e9-4cbefe269b59-stats-auth\") pod \"router-default-5444994796-frw2d\" (UID: \"4af8047b-d906-4458-84e9-4cbefe269b59\") " pod="openshift-ingress/router-default-5444994796-frw2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.615384 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsg2r\" (UniqueName: \"kubernetes.io/projected/e909476f-4cf5-4240-a8bc-51ed96ff8fee-kube-api-access-tsg2r\") pod \"etcd-operator-b45778765-sscgm\" (UID: \"e909476f-4cf5-4240-a8bc-51ed96ff8fee\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.615399 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d93965-f44b-4003-8093-13f33936021e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wd5ld\" (UID: \"00d93965-f44b-4003-8093-13f33936021e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wd5ld" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.615423 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5hd6\" (UniqueName: \"kubernetes.io/projected/6b927555-7584-46c4-ba20-4899aa734e9e-kube-api-access-k5hd6\") pod \"olm-operator-6b444d44fb-v422r\" (UID: \"6b927555-7584-46c4-ba20-4899aa734e9e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v422r" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.615448 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b905a5a-09b5-4cce-a38d-34a92e704c0b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6l265\" (UID: \"5b905a5a-09b5-4cce-a38d-34a92e704c0b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6l265" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.615464 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9n9ld\" (UID: \"dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9n9ld" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.621506 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7c9rp"] Feb 02 17:17:21 crc kubenswrapper[4858]: W0202 17:17:21.654208 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b36dd0a_26c9_4d5f_ae02_aa432af223ad.slice/crio-0da56fd2ac3c9be5ee51ddb757f79ef3e8827a6116379ccaefbec418b3d0dd18 WatchSource:0}: Error finding container 0da56fd2ac3c9be5ee51ddb757f79ef3e8827a6116379ccaefbec418b3d0dd18: Status 404 returned error can't find the container with id 0da56fd2ac3c9be5ee51ddb757f79ef3e8827a6116379ccaefbec418b3d0dd18 Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.674469 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9wxc4"] Feb 02 17:17:21 crc kubenswrapper[4858]: W0202 17:17:21.697684 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42967b1d_ac6e_47d5_b67a_c7e5e8adc7d0.slice/crio-c16290e39f657fd1a897695cf7b6454aacea60212c0eebaf8f708974f115a5ed WatchSource:0}: Error finding container c16290e39f657fd1a897695cf7b6454aacea60212c0eebaf8f708974f115a5ed: Status 404 returned error can't find the container with id c16290e39f657fd1a897695cf7b6454aacea60212c0eebaf8f708974f115a5ed Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.708607 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/machine-api-operator-5694c8668f-ssvjj"] Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.716039 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.716260 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttxcc\" (UniqueName: \"kubernetes.io/projected/d04acd9b-9b86-4119-b327-a3ee6f2690da-kube-api-access-ttxcc\") pod \"dns-operator-744455d44c-jw86f\" (UID: \"d04acd9b-9b86-4119-b327-a3ee6f2690da\") " pod="openshift-dns-operator/dns-operator-744455d44c-jw86f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.716284 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjw7v\" (UniqueName: \"kubernetes.io/projected/b68e9688-43c8-4f2e-aafa-8b634249fa5e-kube-api-access-vjw7v\") pod \"machine-config-operator-74547568cd-xvhj7\" (UID: \"b68e9688-43c8-4f2e-aafa-8b634249fa5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xvhj7" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.716304 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvjhd\" (UniqueName: \"kubernetes.io/projected/77955d8c-4a9c-4484-ad86-dbe070fb4451-kube-api-access-jvjhd\") pod \"migrator-59844c95c7-9dfzw\" (UID: \"77955d8c-4a9c-4484-ad86-dbe070fb4451\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9dfzw" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.716324 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/92f95f3e-0d8e-4ba7-a68f-438ca7f8c610-registration-dir\") pod \"csi-hostpathplugin-xr77f\" (UID: \"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610\") " pod="hostpath-provisioner/csi-hostpathplugin-xr77f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.716340 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/21a5ff9b-0143-4513-8139-84b24e0854af-srv-cert\") pod \"catalog-operator-68c6474976-g8xcw\" (UID: \"21a5ff9b-0143-4513-8139-84b24e0854af\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8xcw" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.716363 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8edb432-a713-4b13-a42d-510636c08f81-serving-cert\") pod \"console-operator-58897d9998-rnrz8\" (UID: \"d8edb432-a713-4b13-a42d-510636c08f81\") " pod="openshift-console-operator/console-operator-58897d9998-rnrz8" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.716379 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75b2f0db-60c5-4143-a42d-e62ee5599503-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4rph8\" (UID: \"75b2f0db-60c5-4143-a42d-e62ee5599503\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rph8" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.716394 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9d8c712-038d-47f2-931a-e6cf3af58665-trusted-ca\") pod \"ingress-operator-5b745b69d9-bhw7l\" (UID: \"c9d8c712-038d-47f2-931a-e6cf3af58665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bhw7l" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.716409 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fd108fb-dfcf-4826-a2e0-8e4877e90f0a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dtf2d\" (UID: \"3fd108fb-dfcf-4826-a2e0-8e4877e90f0a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dtf2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.716425 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c022725c-9725-4d5c-a703-5d61c931d9e8-trusted-ca\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.716439 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e909476f-4cf5-4240-a8bc-51ed96ff8fee-serving-cert\") pod \"etcd-operator-b45778765-sscgm\" (UID: \"e909476f-4cf5-4240-a8bc-51ed96ff8fee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.716454 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89d9c9f7-5f7c-4cc0-add0-bd38785c308e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nd9bb\" (UID: \"89d9c9f7-5f7c-4cc0-add0-bd38785c308e\") " pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.716471 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b68e9688-43c8-4f2e-aafa-8b634249fa5e-proxy-tls\") pod \"machine-config-operator-74547568cd-xvhj7\" (UID: \"b68e9688-43c8-4f2e-aafa-8b634249fa5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xvhj7" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.716486 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbr65\" (UniqueName: \"kubernetes.io/projected/f8ebe30f-bbdd-4bb2-8a0a-3e5ce4972d8f-kube-api-access-zbr65\") pod \"dns-default-q5rqg\" (UID: \"f8ebe30f-bbdd-4bb2-8a0a-3e5ce4972d8f\") " pod="openshift-dns/dns-default-q5rqg" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.716504 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djbk8\" (UniqueName: \"kubernetes.io/projected/b6abb10d-c9ad-4bd7-aa5e-5b07a8c2f68c-kube-api-access-djbk8\") pod \"service-ca-operator-777779d784-jxr6v\" (UID: \"b6abb10d-c9ad-4bd7-aa5e-5b07a8c2f68c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jxr6v" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.716522 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/b68e9688-43c8-4f2e-aafa-8b634249fa5e-images\") pod \"machine-config-operator-74547568cd-xvhj7\" (UID: \"b68e9688-43c8-4f2e-aafa-8b634249fa5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xvhj7" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.716545 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b905a5a-09b5-4cce-a38d-34a92e704c0b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6l265\" (UID: \"5b905a5a-09b5-4cce-a38d-34a92e704c0b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6l265" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.716564 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pc8k\" (UniqueName: \"kubernetes.io/projected/f3c72db6-4315-4210-9cfe-3c27b18e4abd-kube-api-access-2pc8k\") pod \"collect-profiles-29500875-tdl29\" (UID: \"f3c72db6-4315-4210-9cfe-3c27b18e4abd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.716581 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65g4l\" (UniqueName: \"kubernetes.io/projected/21a5ff9b-0143-4513-8139-84b24e0854af-kube-api-access-65g4l\") pod \"catalog-operator-68c6474976-g8xcw\" (UID: \"21a5ff9b-0143-4513-8139-84b24e0854af\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8xcw" Feb 02 17:17:21 crc kubenswrapper[4858]: E0202 17:17:21.718870 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:22.218846539 +0000 UTC m=+143.371261804 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.720438 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9d8c712-038d-47f2-931a-e6cf3af58665-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bhw7l\" (UID: \"c9d8c712-038d-47f2-931a-e6cf3af58665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bhw7l" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.720469 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df86f4d9-d92b-4b3b-9369-3adad4f3fbd1-config\") pod \"authentication-operator-69f744f599-qvvdf\" (UID: \"df86f4d9-d92b-4b3b-9369-3adad4f3fbd1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qvvdf" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.720489 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xbd8\" (UniqueName: \"kubernetes.io/projected/ba109ec6-f5c1-47a3-bb82-7464f3d5508e-kube-api-access-6xbd8\") pod \"machine-config-controller-84d6567774-lfg9z\" (UID: \"ba109ec6-f5c1-47a3-bb82-7464f3d5508e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfg9z" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.720527 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3c72db6-4315-4210-9cfe-3c27b18e4abd-secret-volume\") pod \"collect-profiles-29500875-tdl29\" (UID: \"f3c72db6-4315-4210-9cfe-3c27b18e4abd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.720551 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pdjs\" (UniqueName: \"kubernetes.io/projected/3fd108fb-dfcf-4826-a2e0-8e4877e90f0a-kube-api-access-6pdjs\") pod \"kube-storage-version-migrator-operator-b67b599dd-dtf2d\" (UID: \"3fd108fb-dfcf-4826-a2e0-8e4877e90f0a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dtf2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.720590 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c9d8c712-038d-47f2-931a-e6cf3af58665-metrics-tls\") pod \"ingress-operator-5b745b69d9-bhw7l\" (UID: \"c9d8c712-038d-47f2-931a-e6cf3af58665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bhw7l" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.720608 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvb4g\" (UniqueName: \"kubernetes.io/projected/247bf416-2fae-40d0-be7c-c573e55312f6-kube-api-access-lvb4g\") pod \"ingress-canary-wf65d\" (UID: \"247bf416-2fae-40d0-be7c-c573e55312f6\") " pod="openshift-ingress-canary/ingress-canary-wf65d" Feb 02 17:17:21 
crc kubenswrapper[4858]: I0202 17:17:21.720624 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3c72db6-4315-4210-9cfe-3c27b18e4abd-config-volume\") pod \"collect-profiles-29500875-tdl29\" (UID: \"f3c72db6-4315-4210-9cfe-3c27b18e4abd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.720641 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8edb432-a713-4b13-a42d-510636c08f81-config\") pod \"console-operator-58897d9998-rnrz8\" (UID: \"d8edb432-a713-4b13-a42d-510636c08f81\") " pod="openshift-console-operator/console-operator-58897d9998-rnrz8" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.720679 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00d93965-f44b-4003-8093-13f33936021e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wd5ld\" (UID: \"00d93965-f44b-4003-8093-13f33936021e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wd5ld" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.720701 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d04acd9b-9b86-4119-b327-a3ee6f2690da-metrics-tls\") pod \"dns-operator-744455d44c-jw86f\" (UID: \"d04acd9b-9b86-4119-b327-a3ee6f2690da\") " pod="openshift-dns-operator/dns-operator-744455d44c-jw86f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.720722 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df86f4d9-d92b-4b3b-9369-3adad4f3fbd1-service-ca-bundle\") pod \"authentication-operator-69f744f599-qvvdf\" (UID: \"df86f4d9-d92b-4b3b-9369-3adad4f3fbd1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qvvdf" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.720764 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d57s\" (UniqueName: \"kubernetes.io/projected/4af8047b-d906-4458-84e9-4cbefe269b59-kube-api-access-7d57s\") pod \"router-default-5444994796-frw2d\" (UID: \"4af8047b-d906-4458-84e9-4cbefe269b59\") " pod="openshift-ingress/router-default-5444994796-frw2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.720785 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e909476f-4cf5-4240-a8bc-51ed96ff8fee-etcd-ca\") pod \"etcd-operator-b45778765-sscgm\" (UID: \"e909476f-4cf5-4240-a8bc-51ed96ff8fee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.720801 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8edb432-a713-4b13-a42d-510636c08f81-trusted-ca\") pod \"console-operator-58897d9998-rnrz8\" (UID: \"d8edb432-a713-4b13-a42d-510636c08f81\") " pod="openshift-console-operator/console-operator-58897d9998-rnrz8" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.720835 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4af8047b-d906-4458-84e9-4cbefe269b59-service-ca-bundle\") pod \"router-default-5444994796-frw2d\" (UID: \"4af8047b-d906-4458-84e9-4cbefe269b59\") " pod="openshift-ingress/router-default-5444994796-frw2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.720853 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8ebe30f-bbdd-4bb2-8a0a-3e5ce4972d8f-config-volume\") pod \"dns-default-q5rqg\" (UID: \"f8ebe30f-bbdd-4bb2-8a0a-3e5ce4972d8f\") " pod="openshift-dns/dns-default-q5rqg" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.720869 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/670c2ac6-e01a-4a3b-922b-a8d8aadd693e-signing-cabundle\") pod \"service-ca-9c57cc56f-nzlsw\" (UID: \"670c2ac6-e01a-4a3b-922b-a8d8aadd693e\") " pod="openshift-service-ca/service-ca-9c57cc56f-nzlsw" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.720886 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpwxz\" (UniqueName: \"kubernetes.io/projected/fd740a32-9003-4d27-8c7b-3423717fd9bf-kube-api-access-cpwxz\") pod \"downloads-7954f5f757-ffh76\" (UID: \"fd740a32-9003-4d27-8c7b-3423717fd9bf\") " pod="openshift-console/downloads-7954f5f757-ffh76" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.720921 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7977753-e2b1-4818-b536-98b053f60c3b-webhook-cert\") pod \"packageserver-d55dfcdfc-nxtrt\" (UID: \"e7977753-e2b1-4818-b536-98b053f60c3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.720941 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9864123-f9a0-4524-87ad-131c294b1ffe-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-c2sv9\" (UID: \"e9864123-f9a0-4524-87ad-131c294b1ffe\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2sv9" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.720956 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e7977753-e2b1-4818-b536-98b053f60c3b-tmpfs\") pod \"packageserver-d55dfcdfc-nxtrt\" (UID: \"e7977753-e2b1-4818-b536-98b053f60c3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721002 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfq2n\" (UniqueName: \"kubernetes.io/projected/c9d8c712-038d-47f2-931a-e6cf3af58665-kube-api-access-hfq2n\") pod \"ingress-operator-5b745b69d9-bhw7l\" (UID: \"c9d8c712-038d-47f2-931a-e6cf3af58665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bhw7l" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721023 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e909476f-4cf5-4240-a8bc-51ed96ff8fee-config\") pod \"etcd-operator-b45778765-sscgm\" (UID: \"e909476f-4cf5-4240-a8bc-51ed96ff8fee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" Feb 
02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721038 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx784\" (UniqueName: \"kubernetes.io/projected/2c1d5eb0-d864-4444-a3a1-a67447663431-kube-api-access-dx784\") pod \"machine-config-server-l9845\" (UID: \"2c1d5eb0-d864-4444-a3a1-a67447663431\") " pod="openshift-machine-config-operator/machine-config-server-l9845" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721071 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75b2f0db-60c5-4143-a42d-e62ee5599503-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4rph8\" (UID: \"75b2f0db-60c5-4143-a42d-e62ee5599503\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rph8" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721091 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9864123-f9a0-4524-87ad-131c294b1ffe-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-c2sv9\" (UID: \"e9864123-f9a0-4524-87ad-131c294b1ffe\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2sv9" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721110 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c022725c-9725-4d5c-a703-5d61c931d9e8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721126 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9n9ld\" (UID: \"dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9n9ld" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721164 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/92f95f3e-0d8e-4ba7-a68f-438ca7f8c610-socket-dir\") pod \"csi-hostpathplugin-xr77f\" (UID: \"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610\") " pod="hostpath-provisioner/csi-hostpathplugin-xr77f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721181 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c022725c-9725-4d5c-a703-5d61c931d9e8-bound-sa-token\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721197 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9n9ld\" (UID: \"dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9n9ld" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721231 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9864123-f9a0-4524-87ad-131c294b1ffe-config\") pod \"kube-apiserver-operator-766d6c64bb-c2sv9\" (UID: \"e9864123-f9a0-4524-87ad-131c294b1ffe\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2sv9" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721248 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/92f95f3e-0d8e-4ba7-a68f-438ca7f8c610-csi-data-dir\") pod \"csi-hostpathplugin-xr77f\" (UID: \"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610\") " pod="hostpath-provisioner/csi-hostpathplugin-xr77f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721267 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df86f4d9-d92b-4b3b-9369-3adad4f3fbd1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-qvvdf\" (UID: \"df86f4d9-d92b-4b3b-9369-3adad4f3fbd1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qvvdf" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721281 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2c1d5eb0-d864-4444-a3a1-a67447663431-node-bootstrap-token\") pod \"machine-config-server-l9845\" (UID: \"2c1d5eb0-d864-4444-a3a1-a67447663431\") " pod="openshift-machine-config-operator/machine-config-server-l9845" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721315 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba109ec6-f5c1-47a3-bb82-7464f3d5508e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lfg9z\" (UID: \"ba109ec6-f5c1-47a3-bb82-7464f3d5508e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfg9z" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721343 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00d93965-f44b-4003-8093-13f33936021e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wd5ld\" (UID: \"00d93965-f44b-4003-8093-13f33936021e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wd5ld" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721363 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6b927555-7584-46c4-ba20-4899aa734e9e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-v422r\" (UID: \"6b927555-7584-46c4-ba20-4899aa734e9e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v422r" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721401 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c022725c-9725-4d5c-a703-5d61c931d9e8-registry-tls\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721419 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" 
(UniqueName: \"kubernetes.io/secret/4af8047b-d906-4458-84e9-4cbefe269b59-stats-auth\") pod \"router-default-5444994796-frw2d\" (UID: \"4af8047b-d906-4458-84e9-4cbefe269b59\") " pod="openshift-ingress/router-default-5444994796-frw2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721435 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df86f4d9-d92b-4b3b-9369-3adad4f3fbd1-serving-cert\") pod \"authentication-operator-69f744f599-qvvdf\" (UID: \"df86f4d9-d92b-4b3b-9369-3adad4f3fbd1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qvvdf" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721473 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsg2r\" (UniqueName: \"kubernetes.io/projected/e909476f-4cf5-4240-a8bc-51ed96ff8fee-kube-api-access-tsg2r\") pod \"etcd-operator-b45778765-sscgm\" (UID: \"e909476f-4cf5-4240-a8bc-51ed96ff8fee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721492 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d93965-f44b-4003-8093-13f33936021e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wd5ld\" (UID: \"00d93965-f44b-4003-8093-13f33936021e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wd5ld" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721507 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/247bf416-2fae-40d0-be7c-c573e55312f6-cert\") pod \"ingress-canary-wf65d\" (UID: \"247bf416-2fae-40d0-be7c-c573e55312f6\") " pod="openshift-ingress-canary/ingress-canary-wf65d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721522 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2c1d5eb0-d864-4444-a3a1-a67447663431-certs\") pod \"machine-config-server-l9845\" (UID: \"2c1d5eb0-d864-4444-a3a1-a67447663431\") " pod="openshift-machine-config-operator/machine-config-server-l9845" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721557 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5hd6\" (UniqueName: \"kubernetes.io/projected/6b927555-7584-46c4-ba20-4899aa734e9e-kube-api-access-k5hd6\") pod \"olm-operator-6b444d44fb-v422r\" (UID: \"6b927555-7584-46c4-ba20-4899aa734e9e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v422r" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721573 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f8ebe30f-bbdd-4bb2-8a0a-3e5ce4972d8f-metrics-tls\") pod \"dns-default-q5rqg\" (UID: \"f8ebe30f-bbdd-4bb2-8a0a-3e5ce4972d8f\") " pod="openshift-dns/dns-default-q5rqg" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721591 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b905a5a-09b5-4cce-a38d-34a92e704c0b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6l265\" (UID: \"5b905a5a-09b5-4cce-a38d-34a92e704c0b\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6l265" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721623 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9n9ld\" (UID: \"dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9n9ld" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721642 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c022725c-9725-4d5c-a703-5d61c931d9e8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721659 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ba109ec6-f5c1-47a3-bb82-7464f3d5508e-proxy-tls\") pod \"machine-config-controller-84d6567774-lfg9z\" (UID: \"ba109ec6-f5c1-47a3-bb82-7464f3d5508e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfg9z" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721676 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b2f0db-60c5-4143-a42d-e62ee5599503-config\") pod \"kube-controller-manager-operator-78b949d7b-4rph8\" (UID: \"75b2f0db-60c5-4143-a42d-e62ee5599503\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rph8" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721719 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6jg6\" (UniqueName: \"kubernetes.io/projected/c022725c-9725-4d5c-a703-5d61c931d9e8-kube-api-access-c6jg6\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721738 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/60962908-fe41-4333-80b3-ed8bbc9c4fcb-available-featuregates\") pod \"openshift-config-operator-7777fb866f-45snr\" (UID: \"60962908-fe41-4333-80b3-ed8bbc9c4fcb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-45snr" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721758 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb9mj\" (UniqueName: \"kubernetes.io/projected/ac750cb2-c0b0-4044-82c1-41c0a46748e6-kube-api-access-qb9mj\") pod \"multus-admission-controller-857f4d67dd-8nr74\" (UID: \"ac750cb2-c0b0-4044-82c1-41c0a46748e6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8nr74" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721795 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h85qc\" (UniqueName: \"kubernetes.io/projected/91c97885-896e-4947-907f-acb0a86ce947-kube-api-access-h85qc\") pod \"package-server-manager-789f6589d5-mj6r9\" (UID: 
\"91c97885-896e-4947-907f-acb0a86ce947\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mj6r9" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721811 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c022725c-9725-4d5c-a703-5d61c931d9e8-registry-certificates\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721828 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfjsz\" (UniqueName: \"kubernetes.io/projected/5b905a5a-09b5-4cce-a38d-34a92e704c0b-kube-api-access-mfjsz\") pod \"openshift-controller-manager-operator-756b6f6bc6-6l265\" (UID: \"5b905a5a-09b5-4cce-a38d-34a92e704c0b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6l265" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721864 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65gng\" (UniqueName: \"kubernetes.io/projected/e330b41c-dacd-4c4b-a013-dd16a913ac54-kube-api-access-65gng\") pod \"control-plane-machine-set-operator-78cbb6b69f-hgpp5\" (UID: \"e330b41c-dacd-4c4b-a013-dd16a913ac54\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgpp5" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721897 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721915 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e330b41c-dacd-4c4b-a013-dd16a913ac54-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hgpp5\" (UID: \"e330b41c-dacd-4c4b-a013-dd16a913ac54\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgpp5" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721953 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6abb10d-c9ad-4bd7-aa5e-5b07a8c2f68c-config\") pod \"service-ca-operator-777779d784-jxr6v\" (UID: \"b6abb10d-c9ad-4bd7-aa5e-5b07a8c2f68c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jxr6v" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.721997 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfxpk\" (UniqueName: \"kubernetes.io/projected/dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24-kube-api-access-vfxpk\") pod \"cluster-image-registry-operator-dc59b4c8b-9n9ld\" (UID: \"dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9n9ld" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722015 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-799bl\" (UniqueName: 
\"kubernetes.io/projected/df86f4d9-d92b-4b3b-9369-3adad4f3fbd1-kube-api-access-799bl\") pod \"authentication-operator-69f744f599-qvvdf\" (UID: \"df86f4d9-d92b-4b3b-9369-3adad4f3fbd1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qvvdf" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722033 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4af8047b-d906-4458-84e9-4cbefe269b59-metrics-certs\") pod \"router-default-5444994796-frw2d\" (UID: \"4af8047b-d906-4458-84e9-4cbefe269b59\") " pod="openshift-ingress/router-default-5444994796-frw2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722048 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/92f95f3e-0d8e-4ba7-a68f-438ca7f8c610-mountpoint-dir\") pod \"csi-hostpathplugin-xr77f\" (UID: \"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610\") " pod="hostpath-provisioner/csi-hostpathplugin-xr77f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722082 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7wzr\" (UniqueName: \"kubernetes.io/projected/92f95f3e-0d8e-4ba7-a68f-438ca7f8c610-kube-api-access-x7wzr\") pod \"csi-hostpathplugin-xr77f\" (UID: \"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610\") " pod="hostpath-provisioner/csi-hostpathplugin-xr77f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722098 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2c9b\" (UniqueName: \"kubernetes.io/projected/670c2ac6-e01a-4a3b-922b-a8d8aadd693e-kube-api-access-t2c9b\") pod \"service-ca-9c57cc56f-nzlsw\" (UID: \"670c2ac6-e01a-4a3b-922b-a8d8aadd693e\") " pod="openshift-service-ca/service-ca-9c57cc56f-nzlsw" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722119 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfnl7\" (UniqueName: \"kubernetes.io/projected/60962908-fe41-4333-80b3-ed8bbc9c4fcb-kube-api-access-sfnl7\") pod \"openshift-config-operator-7777fb866f-45snr\" (UID: \"60962908-fe41-4333-80b3-ed8bbc9c4fcb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-45snr" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722155 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e909476f-4cf5-4240-a8bc-51ed96ff8fee-etcd-client\") pod \"etcd-operator-b45778765-sscgm\" (UID: \"e909476f-4cf5-4240-a8bc-51ed96ff8fee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722176 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ac750cb2-c0b0-4044-82c1-41c0a46748e6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8nr74\" (UID: \"ac750cb2-c0b0-4044-82c1-41c0a46748e6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8nr74" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722192 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60962908-fe41-4333-80b3-ed8bbc9c4fcb-serving-cert\") pod \"openshift-config-operator-7777fb866f-45snr\" (UID: \"60962908-fe41-4333-80b3-ed8bbc9c4fcb\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-45snr" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722228 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6b927555-7584-46c4-ba20-4899aa734e9e-srv-cert\") pod \"olm-operator-6b444d44fb-v422r\" (UID: \"6b927555-7584-46c4-ba20-4899aa734e9e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v422r" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722247 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4af8047b-d906-4458-84e9-4cbefe269b59-default-certificate\") pod \"router-default-5444994796-frw2d\" (UID: \"4af8047b-d906-4458-84e9-4cbefe269b59\") " pod="openshift-ingress/router-default-5444994796-frw2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722264 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/670c2ac6-e01a-4a3b-922b-a8d8aadd693e-signing-key\") pod \"service-ca-9c57cc56f-nzlsw\" (UID: \"670c2ac6-e01a-4a3b-922b-a8d8aadd693e\") " pod="openshift-service-ca/service-ca-9c57cc56f-nzlsw" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722281 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xchr6\" (UniqueName: \"kubernetes.io/projected/d8edb432-a713-4b13-a42d-510636c08f81-kube-api-access-xchr6\") pod \"console-operator-58897d9998-rnrz8\" (UID: \"d8edb432-a713-4b13-a42d-510636c08f81\") " pod="openshift-console-operator/console-operator-58897d9998-rnrz8" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722300 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/89d9c9f7-5f7c-4cc0-add0-bd38785c308e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nd9bb\" (UID: \"89d9c9f7-5f7c-4cc0-add0-bd38785c308e\") " pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722319 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b68e9688-43c8-4f2e-aafa-8b634249fa5e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-xvhj7\" (UID: \"b68e9688-43c8-4f2e-aafa-8b634249fa5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xvhj7" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722336 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7977753-e2b1-4818-b536-98b053f60c3b-apiservice-cert\") pod \"packageserver-d55dfcdfc-nxtrt\" (UID: \"e7977753-e2b1-4818-b536-98b053f60c3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722352 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/21a5ff9b-0143-4513-8139-84b24e0854af-profile-collector-cert\") pod \"catalog-operator-68c6474976-g8xcw\" (UID: \"21a5ff9b-0143-4513-8139-84b24e0854af\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8xcw" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 
17:17:21.722377 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf69k\" (UniqueName: \"kubernetes.io/projected/e7977753-e2b1-4818-b536-98b053f60c3b-kube-api-access-pf69k\") pod \"packageserver-d55dfcdfc-nxtrt\" (UID: \"e7977753-e2b1-4818-b536-98b053f60c3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722394 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/92f95f3e-0d8e-4ba7-a68f-438ca7f8c610-plugins-dir\") pod \"csi-hostpathplugin-xr77f\" (UID: \"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610\") " pod="hostpath-provisioner/csi-hostpathplugin-xr77f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722411 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s69g4\" (UniqueName: \"kubernetes.io/projected/89d9c9f7-5f7c-4cc0-add0-bd38785c308e-kube-api-access-s69g4\") pod \"marketplace-operator-79b997595-nd9bb\" (UID: \"89d9c9f7-5f7c-4cc0-add0-bd38785c308e\") " pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722429 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/91c97885-896e-4947-907f-acb0a86ce947-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mj6r9\" (UID: \"91c97885-896e-4947-907f-acb0a86ce947\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mj6r9" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722448 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6abb10d-c9ad-4bd7-aa5e-5b07a8c2f68c-serving-cert\") pod \"service-ca-operator-777779d784-jxr6v\" (UID: \"b6abb10d-c9ad-4bd7-aa5e-5b07a8c2f68c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jxr6v" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722473 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fd108fb-dfcf-4826-a2e0-8e4877e90f0a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dtf2d\" (UID: \"3fd108fb-dfcf-4826-a2e0-8e4877e90f0a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dtf2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722489 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e909476f-4cf5-4240-a8bc-51ed96ff8fee-etcd-service-ca\") pod \"etcd-operator-b45778765-sscgm\" (UID: \"e909476f-4cf5-4240-a8bc-51ed96ff8fee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.722655 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b68e9688-43c8-4f2e-aafa-8b634249fa5e-images\") pod \"machine-config-operator-74547568cd-xvhj7\" (UID: \"b68e9688-43c8-4f2e-aafa-8b634249fa5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xvhj7" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.723084 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e909476f-4cf5-4240-a8bc-51ed96ff8fee-etcd-service-ca\") pod \"etcd-operator-b45778765-sscgm\" (UID: \"e909476f-4cf5-4240-a8bc-51ed96ff8fee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.723678 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d93965-f44b-4003-8093-13f33936021e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wd5ld\" (UID: \"00d93965-f44b-4003-8093-13f33936021e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wd5ld" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.724231 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c022725c-9725-4d5c-a703-5d61c931d9e8-trusted-ca\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.724570 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fd108fb-dfcf-4826-a2e0-8e4877e90f0a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dtf2d\" (UID: \"3fd108fb-dfcf-4826-a2e0-8e4877e90f0a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dtf2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.725069 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b905a5a-09b5-4cce-a38d-34a92e704c0b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6l265\" (UID: \"5b905a5a-09b5-4cce-a38d-34a92e704c0b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6l265" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.725358 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/60962908-fe41-4333-80b3-ed8bbc9c4fcb-available-featuregates\") pod \"openshift-config-operator-7777fb866f-45snr\" (UID: \"60962908-fe41-4333-80b3-ed8bbc9c4fcb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-45snr" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.725489 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c022725c-9725-4d5c-a703-5d61c931d9e8-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.725737 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4af8047b-d906-4458-84e9-4cbefe269b59-stats-auth\") pod \"router-default-5444994796-frw2d\" (UID: \"4af8047b-d906-4458-84e9-4cbefe269b59\") " pod="openshift-ingress/router-default-5444994796-frw2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.725763 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8edb432-a713-4b13-a42d-510636c08f81-serving-cert\") pod 
\"console-operator-58897d9998-rnrz8\" (UID: \"d8edb432-a713-4b13-a42d-510636c08f81\") " pod="openshift-console-operator/console-operator-58897d9998-rnrz8" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.725805 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b2f0db-60c5-4143-a42d-e62ee5599503-config\") pod \"kube-controller-manager-operator-78b949d7b-4rph8\" (UID: \"75b2f0db-60c5-4143-a42d-e62ee5599503\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rph8" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.725838 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/21a5ff9b-0143-4513-8139-84b24e0854af-srv-cert\") pod \"catalog-operator-68c6474976-g8xcw\" (UID: \"21a5ff9b-0143-4513-8139-84b24e0854af\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8xcw" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.726866 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e909476f-4cf5-4240-a8bc-51ed96ff8fee-config\") pod \"etcd-operator-b45778765-sscgm\" (UID: \"e909476f-4cf5-4240-a8bc-51ed96ff8fee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" Feb 02 17:17:21 crc kubenswrapper[4858]: E0202 17:17:21.727160 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:22.227145064 +0000 UTC m=+143.379560399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.727580 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c022725c-9725-4d5c-a703-5d61c931d9e8-registry-certificates\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.727587 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9n9ld\" (UID: \"dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9n9ld" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.728080 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9d8c712-038d-47f2-931a-e6cf3af58665-trusted-ca\") pod \"ingress-operator-5b745b69d9-bhw7l\" (UID: \"c9d8c712-038d-47f2-931a-e6cf3af58665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bhw7l" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.728323 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c9d8c712-038d-47f2-931a-e6cf3af58665-metrics-tls\") pod \"ingress-operator-5b745b69d9-bhw7l\" (UID: \"c9d8c712-038d-47f2-931a-e6cf3af58665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bhw7l" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.729088 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e909476f-4cf5-4240-a8bc-51ed96ff8fee-serving-cert\") pod \"etcd-operator-b45778765-sscgm\" (UID: \"e909476f-4cf5-4240-a8bc-51ed96ff8fee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.730373 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/91c97885-896e-4947-907f-acb0a86ce947-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mj6r9\" (UID: \"91c97885-896e-4947-907f-acb0a86ce947\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mj6r9" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.730866 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b68e9688-43c8-4f2e-aafa-8b634249fa5e-proxy-tls\") pod \"machine-config-operator-74547568cd-xvhj7\" (UID: \"b68e9688-43c8-4f2e-aafa-8b634249fa5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xvhj7" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.731641 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00d93965-f44b-4003-8093-13f33936021e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wd5ld\" (UID: \"00d93965-f44b-4003-8093-13f33936021e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wd5ld" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.731707 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e330b41c-dacd-4c4b-a013-dd16a913ac54-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hgpp5\" (UID: \"e330b41c-dacd-4c4b-a013-dd16a913ac54\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgpp5" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.731904 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c022725c-9725-4d5c-a703-5d61c931d9e8-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.732410 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b68e9688-43c8-4f2e-aafa-8b634249fa5e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-xvhj7\" (UID: \"b68e9688-43c8-4f2e-aafa-8b634249fa5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xvhj7" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.733669 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/e909476f-4cf5-4240-a8bc-51ed96ff8fee-etcd-client\") pod \"etcd-operator-b45778765-sscgm\" (UID: \"e909476f-4cf5-4240-a8bc-51ed96ff8fee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.733933 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9864123-f9a0-4524-87ad-131c294b1ffe-config\") pod \"kube-apiserver-operator-766d6c64bb-c2sv9\" (UID: \"e9864123-f9a0-4524-87ad-131c294b1ffe\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2sv9" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.734047 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/21a5ff9b-0143-4513-8139-84b24e0854af-profile-collector-cert\") pod \"catalog-operator-68c6474976-g8xcw\" (UID: \"21a5ff9b-0143-4513-8139-84b24e0854af\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8xcw" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.734764 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fd108fb-dfcf-4826-a2e0-8e4877e90f0a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dtf2d\" (UID: \"3fd108fb-dfcf-4826-a2e0-8e4877e90f0a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dtf2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.735087 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9n9ld\" (UID: \"dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9n9ld" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.735711 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e909476f-4cf5-4240-a8bc-51ed96ff8fee-etcd-ca\") pod \"etcd-operator-b45778765-sscgm\" (UID: \"e909476f-4cf5-4240-a8bc-51ed96ff8fee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.735772 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4af8047b-d906-4458-84e9-4cbefe269b59-default-certificate\") pod \"router-default-5444994796-frw2d\" (UID: \"4af8047b-d906-4458-84e9-4cbefe269b59\") " pod="openshift-ingress/router-default-5444994796-frw2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.736188 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c022725c-9725-4d5c-a703-5d61c931d9e8-registry-tls\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.736459 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4af8047b-d906-4458-84e9-4cbefe269b59-service-ca-bundle\") pod \"router-default-5444994796-frw2d\" (UID: \"4af8047b-d906-4458-84e9-4cbefe269b59\") " pod="openshift-ingress/router-default-5444994796-frw2d" Feb 02 
17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.736691 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9864123-f9a0-4524-87ad-131c294b1ffe-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-c2sv9\" (UID: \"e9864123-f9a0-4524-87ad-131c294b1ffe\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2sv9" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.738225 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ac750cb2-c0b0-4044-82c1-41c0a46748e6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8nr74\" (UID: \"ac750cb2-c0b0-4044-82c1-41c0a46748e6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8nr74" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.737644 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8edb432-a713-4b13-a42d-510636c08f81-trusted-ca\") pod \"console-operator-58897d9998-rnrz8\" (UID: \"d8edb432-a713-4b13-a42d-510636c08f81\") " pod="openshift-console-operator/console-operator-58897d9998-rnrz8" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.738721 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b905a5a-09b5-4cce-a38d-34a92e704c0b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6l265\" (UID: \"5b905a5a-09b5-4cce-a38d-34a92e704c0b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6l265" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.739311 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d04acd9b-9b86-4119-b327-a3ee6f2690da-metrics-tls\") pod \"dns-operator-744455d44c-jw86f\" (UID: \"d04acd9b-9b86-4119-b327-a3ee6f2690da\") " pod="openshift-dns-operator/dns-operator-744455d44c-jw86f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.740708 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6b927555-7584-46c4-ba20-4899aa734e9e-srv-cert\") pod \"olm-operator-6b444d44fb-v422r\" (UID: \"6b927555-7584-46c4-ba20-4899aa734e9e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v422r" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.740824 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75b2f0db-60c5-4143-a42d-e62ee5599503-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4rph8\" (UID: \"75b2f0db-60c5-4143-a42d-e62ee5599503\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rph8" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.741008 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8edb432-a713-4b13-a42d-510636c08f81-config\") pod \"console-operator-58897d9998-rnrz8\" (UID: \"d8edb432-a713-4b13-a42d-510636c08f81\") " pod="openshift-console-operator/console-operator-58897d9998-rnrz8" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.741354 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60962908-fe41-4333-80b3-ed8bbc9c4fcb-serving-cert\") pod 
\"openshift-config-operator-7777fb866f-45snr\" (UID: \"60962908-fe41-4333-80b3-ed8bbc9c4fcb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-45snr" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.744300 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4af8047b-d906-4458-84e9-4cbefe269b59-metrics-certs\") pod \"router-default-5444994796-frw2d\" (UID: \"4af8047b-d906-4458-84e9-4cbefe269b59\") " pod="openshift-ingress/router-default-5444994796-frw2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.745080 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6b927555-7584-46c4-ba20-4899aa734e9e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-v422r\" (UID: \"6b927555-7584-46c4-ba20-4899aa734e9e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v422r" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.754527 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttxcc\" (UniqueName: \"kubernetes.io/projected/d04acd9b-9b86-4119-b327-a3ee6f2690da-kube-api-access-ttxcc\") pod \"dns-operator-744455d44c-jw86f\" (UID: \"d04acd9b-9b86-4119-b327-a3ee6f2690da\") " pod="openshift-dns-operator/dns-operator-744455d44c-jw86f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.780605 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvjhd\" (UniqueName: \"kubernetes.io/projected/77955d8c-4a9c-4484-ad86-dbe070fb4451-kube-api-access-jvjhd\") pod \"migrator-59844c95c7-9dfzw\" (UID: \"77955d8c-4a9c-4484-ad86-dbe070fb4451\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9dfzw" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.791415 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65g4l\" (UniqueName: \"kubernetes.io/projected/21a5ff9b-0143-4513-8139-84b24e0854af-kube-api-access-65g4l\") pod \"catalog-operator-68c6474976-g8xcw\" (UID: \"21a5ff9b-0143-4513-8139-84b24e0854af\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8xcw" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.822758 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75b2f0db-60c5-4143-a42d-e62ee5599503-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4rph8\" (UID: \"75b2f0db-60c5-4143-a42d-e62ee5599503\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rph8" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.822921 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:21 crc kubenswrapper[4858]: E0202 17:17:21.823074 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:22.323052239 +0000 UTC m=+143.475467504 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.824200 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/92f95f3e-0d8e-4ba7-a68f-438ca7f8c610-mountpoint-dir\") pod \"csi-hostpathplugin-xr77f\" (UID: \"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610\") " pod="hostpath-provisioner/csi-hostpathplugin-xr77f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.824236 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7wzr\" (UniqueName: \"kubernetes.io/projected/92f95f3e-0d8e-4ba7-a68f-438ca7f8c610-kube-api-access-x7wzr\") pod \"csi-hostpathplugin-xr77f\" (UID: \"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610\") " pod="hostpath-provisioner/csi-hostpathplugin-xr77f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.824264 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2c9b\" (UniqueName: \"kubernetes.io/projected/670c2ac6-e01a-4a3b-922b-a8d8aadd693e-kube-api-access-t2c9b\") pod \"service-ca-9c57cc56f-nzlsw\" (UID: \"670c2ac6-e01a-4a3b-922b-a8d8aadd693e\") " pod="openshift-service-ca/service-ca-9c57cc56f-nzlsw" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.824290 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/670c2ac6-e01a-4a3b-922b-a8d8aadd693e-signing-key\") pod \"service-ca-9c57cc56f-nzlsw\" (UID: \"670c2ac6-e01a-4a3b-922b-a8d8aadd693e\") " pod="openshift-service-ca/service-ca-9c57cc56f-nzlsw" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.824346 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/92f95f3e-0d8e-4ba7-a68f-438ca7f8c610-mountpoint-dir\") pod \"csi-hostpathplugin-xr77f\" (UID: \"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610\") " pod="hostpath-provisioner/csi-hostpathplugin-xr77f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.824450 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/89d9c9f7-5f7c-4cc0-add0-bd38785c308e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nd9bb\" (UID: \"89d9c9f7-5f7c-4cc0-add0-bd38785c308e\") " pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.824475 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7977753-e2b1-4818-b536-98b053f60c3b-apiservice-cert\") pod \"packageserver-d55dfcdfc-nxtrt\" (UID: \"e7977753-e2b1-4818-b536-98b053f60c3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.824848 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf69k\" (UniqueName: 
\"kubernetes.io/projected/e7977753-e2b1-4818-b536-98b053f60c3b-kube-api-access-pf69k\") pod \"packageserver-d55dfcdfc-nxtrt\" (UID: \"e7977753-e2b1-4818-b536-98b053f60c3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.824868 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s69g4\" (UniqueName: \"kubernetes.io/projected/89d9c9f7-5f7c-4cc0-add0-bd38785c308e-kube-api-access-s69g4\") pod \"marketplace-operator-79b997595-nd9bb\" (UID: \"89d9c9f7-5f7c-4cc0-add0-bd38785c308e\") " pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.824884 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/92f95f3e-0d8e-4ba7-a68f-438ca7f8c610-plugins-dir\") pod \"csi-hostpathplugin-xr77f\" (UID: \"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610\") " pod="hostpath-provisioner/csi-hostpathplugin-xr77f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.824901 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6abb10d-c9ad-4bd7-aa5e-5b07a8c2f68c-serving-cert\") pod \"service-ca-operator-777779d784-jxr6v\" (UID: \"b6abb10d-c9ad-4bd7-aa5e-5b07a8c2f68c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jxr6v" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.824946 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/92f95f3e-0d8e-4ba7-a68f-438ca7f8c610-registration-dir\") pod \"csi-hostpathplugin-xr77f\" (UID: \"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610\") " pod="hostpath-provisioner/csi-hostpathplugin-xr77f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825023 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89d9c9f7-5f7c-4cc0-add0-bd38785c308e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nd9bb\" (UID: \"89d9c9f7-5f7c-4cc0-add0-bd38785c308e\") " pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825041 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbr65\" (UniqueName: \"kubernetes.io/projected/f8ebe30f-bbdd-4bb2-8a0a-3e5ce4972d8f-kube-api-access-zbr65\") pod \"dns-default-q5rqg\" (UID: \"f8ebe30f-bbdd-4bb2-8a0a-3e5ce4972d8f\") " pod="openshift-dns/dns-default-q5rqg" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825058 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djbk8\" (UniqueName: \"kubernetes.io/projected/b6abb10d-c9ad-4bd7-aa5e-5b07a8c2f68c-kube-api-access-djbk8\") pod \"service-ca-operator-777779d784-jxr6v\" (UID: \"b6abb10d-c9ad-4bd7-aa5e-5b07a8c2f68c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jxr6v" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825078 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pc8k\" (UniqueName: \"kubernetes.io/projected/f3c72db6-4315-4210-9cfe-3c27b18e4abd-kube-api-access-2pc8k\") pod \"collect-profiles-29500875-tdl29\" (UID: \"f3c72db6-4315-4210-9cfe-3c27b18e4abd\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825100 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df86f4d9-d92b-4b3b-9369-3adad4f3fbd1-config\") pod \"authentication-operator-69f744f599-qvvdf\" (UID: \"df86f4d9-d92b-4b3b-9369-3adad4f3fbd1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qvvdf" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825129 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xbd8\" (UniqueName: \"kubernetes.io/projected/ba109ec6-f5c1-47a3-bb82-7464f3d5508e-kube-api-access-6xbd8\") pod \"machine-config-controller-84d6567774-lfg9z\" (UID: \"ba109ec6-f5c1-47a3-bb82-7464f3d5508e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfg9z" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825166 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3c72db6-4315-4210-9cfe-3c27b18e4abd-secret-volume\") pod \"collect-profiles-29500875-tdl29\" (UID: \"f3c72db6-4315-4210-9cfe-3c27b18e4abd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825186 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvb4g\" (UniqueName: \"kubernetes.io/projected/247bf416-2fae-40d0-be7c-c573e55312f6-kube-api-access-lvb4g\") pod \"ingress-canary-wf65d\" (UID: \"247bf416-2fae-40d0-be7c-c573e55312f6\") " pod="openshift-ingress-canary/ingress-canary-wf65d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825202 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3c72db6-4315-4210-9cfe-3c27b18e4abd-config-volume\") pod \"collect-profiles-29500875-tdl29\" (UID: \"f3c72db6-4315-4210-9cfe-3c27b18e4abd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825230 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df86f4d9-d92b-4b3b-9369-3adad4f3fbd1-service-ca-bundle\") pod \"authentication-operator-69f744f599-qvvdf\" (UID: \"df86f4d9-d92b-4b3b-9369-3adad4f3fbd1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qvvdf" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825262 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8ebe30f-bbdd-4bb2-8a0a-3e5ce4972d8f-config-volume\") pod \"dns-default-q5rqg\" (UID: \"f8ebe30f-bbdd-4bb2-8a0a-3e5ce4972d8f\") " pod="openshift-dns/dns-default-q5rqg" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825276 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/670c2ac6-e01a-4a3b-922b-a8d8aadd693e-signing-cabundle\") pod \"service-ca-9c57cc56f-nzlsw\" (UID: \"670c2ac6-e01a-4a3b-922b-a8d8aadd693e\") " pod="openshift-service-ca/service-ca-9c57cc56f-nzlsw" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825301 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e7977753-e2b1-4818-b536-98b053f60c3b-tmpfs\") pod \"packageserver-d55dfcdfc-nxtrt\" (UID: \"e7977753-e2b1-4818-b536-98b053f60c3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825316 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7977753-e2b1-4818-b536-98b053f60c3b-webhook-cert\") pod \"packageserver-d55dfcdfc-nxtrt\" (UID: \"e7977753-e2b1-4818-b536-98b053f60c3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825345 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx784\" (UniqueName: \"kubernetes.io/projected/2c1d5eb0-d864-4444-a3a1-a67447663431-kube-api-access-dx784\") pod \"machine-config-server-l9845\" (UID: \"2c1d5eb0-d864-4444-a3a1-a67447663431\") " pod="openshift-machine-config-operator/machine-config-server-l9845" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825363 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/92f95f3e-0d8e-4ba7-a68f-438ca7f8c610-socket-dir\") pod \"csi-hostpathplugin-xr77f\" (UID: \"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610\") " pod="hostpath-provisioner/csi-hostpathplugin-xr77f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825391 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/92f95f3e-0d8e-4ba7-a68f-438ca7f8c610-csi-data-dir\") pod \"csi-hostpathplugin-xr77f\" (UID: \"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610\") " pod="hostpath-provisioner/csi-hostpathplugin-xr77f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825407 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df86f4d9-d92b-4b3b-9369-3adad4f3fbd1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-qvvdf\" (UID: \"df86f4d9-d92b-4b3b-9369-3adad4f3fbd1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qvvdf" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825422 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2c1d5eb0-d864-4444-a3a1-a67447663431-node-bootstrap-token\") pod \"machine-config-server-l9845\" (UID: \"2c1d5eb0-d864-4444-a3a1-a67447663431\") " pod="openshift-machine-config-operator/machine-config-server-l9845" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825440 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba109ec6-f5c1-47a3-bb82-7464f3d5508e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lfg9z\" (UID: \"ba109ec6-f5c1-47a3-bb82-7464f3d5508e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfg9z" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825488 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df86f4d9-d92b-4b3b-9369-3adad4f3fbd1-serving-cert\") pod \"authentication-operator-69f744f599-qvvdf\" (UID: \"df86f4d9-d92b-4b3b-9369-3adad4f3fbd1\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-qvvdf" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825511 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/247bf416-2fae-40d0-be7c-c573e55312f6-cert\") pod \"ingress-canary-wf65d\" (UID: \"247bf416-2fae-40d0-be7c-c573e55312f6\") " pod="openshift-ingress-canary/ingress-canary-wf65d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825528 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2c1d5eb0-d864-4444-a3a1-a67447663431-certs\") pod \"machine-config-server-l9845\" (UID: \"2c1d5eb0-d864-4444-a3a1-a67447663431\") " pod="openshift-machine-config-operator/machine-config-server-l9845" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825548 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f8ebe30f-bbdd-4bb2-8a0a-3e5ce4972d8f-metrics-tls\") pod \"dns-default-q5rqg\" (UID: \"f8ebe30f-bbdd-4bb2-8a0a-3e5ce4972d8f\") " pod="openshift-dns/dns-default-q5rqg" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825566 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ba109ec6-f5c1-47a3-bb82-7464f3d5508e-proxy-tls\") pod \"machine-config-controller-84d6567774-lfg9z\" (UID: \"ba109ec6-f5c1-47a3-bb82-7464f3d5508e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfg9z" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825620 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825644 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6abb10d-c9ad-4bd7-aa5e-5b07a8c2f68c-config\") pod \"service-ca-operator-777779d784-jxr6v\" (UID: \"b6abb10d-c9ad-4bd7-aa5e-5b07a8c2f68c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jxr6v" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825671 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-799bl\" (UniqueName: \"kubernetes.io/projected/df86f4d9-d92b-4b3b-9369-3adad4f3fbd1-kube-api-access-799bl\") pod \"authentication-operator-69f744f599-qvvdf\" (UID: \"df86f4d9-d92b-4b3b-9369-3adad4f3fbd1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qvvdf" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.825968 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/92f95f3e-0d8e-4ba7-a68f-438ca7f8c610-plugins-dir\") pod \"csi-hostpathplugin-xr77f\" (UID: \"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610\") " pod="hostpath-provisioner/csi-hostpathplugin-xr77f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.826411 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/92f95f3e-0d8e-4ba7-a68f-438ca7f8c610-registration-dir\") pod \"csi-hostpathplugin-xr77f\" (UID: \"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610\") " pod="hostpath-provisioner/csi-hostpathplugin-xr77f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.826596 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e7977753-e2b1-4818-b536-98b053f60c3b-tmpfs\") pod \"packageserver-d55dfcdfc-nxtrt\" (UID: \"e7977753-e2b1-4818-b536-98b053f60c3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.827718 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/89d9c9f7-5f7c-4cc0-add0-bd38785c308e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nd9bb\" (UID: \"89d9c9f7-5f7c-4cc0-add0-bd38785c308e\") " pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.827964 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3c72db6-4315-4210-9cfe-3c27b18e4abd-config-volume\") pod \"collect-profiles-29500875-tdl29\" (UID: \"f3c72db6-4315-4210-9cfe-3c27b18e4abd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.828243 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7977753-e2b1-4818-b536-98b053f60c3b-apiservice-cert\") pod \"packageserver-d55dfcdfc-nxtrt\" (UID: \"e7977753-e2b1-4818-b536-98b053f60c3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.828354 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/92f95f3e-0d8e-4ba7-a68f-438ca7f8c610-socket-dir\") pod \"csi-hostpathplugin-xr77f\" (UID: \"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610\") " pod="hostpath-provisioner/csi-hostpathplugin-xr77f" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.829381 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/670c2ac6-e01a-4a3b-922b-a8d8aadd693e-signing-key\") pod \"service-ca-9c57cc56f-nzlsw\" (UID: \"670c2ac6-e01a-4a3b-922b-a8d8aadd693e\") " pod="openshift-service-ca/service-ca-9c57cc56f-nzlsw" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.829552 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df86f4d9-d92b-4b3b-9369-3adad4f3fbd1-service-ca-bundle\") pod \"authentication-operator-69f744f599-qvvdf\" (UID: \"df86f4d9-d92b-4b3b-9369-3adad4f3fbd1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qvvdf" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.830007 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df86f4d9-d92b-4b3b-9369-3adad4f3fbd1-config\") pod \"authentication-operator-69f744f599-qvvdf\" (UID: \"df86f4d9-d92b-4b3b-9369-3adad4f3fbd1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qvvdf" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.830190 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8ebe30f-bbdd-4bb2-8a0a-3e5ce4972d8f-config-volume\") pod \"dns-default-q5rqg\" (UID: \"f8ebe30f-bbdd-4bb2-8a0a-3e5ce4972d8f\") " pod="openshift-dns/dns-default-q5rqg" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.830194 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89d9c9f7-5f7c-4cc0-add0-bd38785c308e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nd9bb\" (UID: \"89d9c9f7-5f7c-4cc0-add0-bd38785c308e\") " pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.830234 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba109ec6-f5c1-47a3-bb82-7464f3d5508e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lfg9z\" (UID: \"ba109ec6-f5c1-47a3-bb82-7464f3d5508e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfg9z" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.830257 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/92f95f3e-0d8e-4ba7-a68f-438ca7f8c610-csi-data-dir\") pod \"csi-hostpathplugin-xr77f\" (UID: \"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610\") " pod="hostpath-provisioner/csi-hostpathplugin-xr77f" Feb 02 17:17:21 crc kubenswrapper[4858]: E0202 17:17:21.830463 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:22.330447368 +0000 UTC m=+143.482862633 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.830892 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6abb10d-c9ad-4bd7-aa5e-5b07a8c2f68c-serving-cert\") pod \"service-ca-operator-777779d784-jxr6v\" (UID: \"b6abb10d-c9ad-4bd7-aa5e-5b07a8c2f68c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jxr6v" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.831185 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/670c2ac6-e01a-4a3b-922b-a8d8aadd693e-signing-cabundle\") pod \"service-ca-9c57cc56f-nzlsw\" (UID: \"670c2ac6-e01a-4a3b-922b-a8d8aadd693e\") " pod="openshift-service-ca/service-ca-9c57cc56f-nzlsw" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.831766 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df86f4d9-d92b-4b3b-9369-3adad4f3fbd1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-qvvdf\" (UID: \"df86f4d9-d92b-4b3b-9369-3adad4f3fbd1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qvvdf" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.833246 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df86f4d9-d92b-4b3b-9369-3adad4f3fbd1-serving-cert\") pod \"authentication-operator-69f744f599-qvvdf\" (UID: \"df86f4d9-d92b-4b3b-9369-3adad4f3fbd1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qvvdf" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.833259 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f8ebe30f-bbdd-4bb2-8a0a-3e5ce4972d8f-metrics-tls\") pod \"dns-default-q5rqg\" (UID: \"f8ebe30f-bbdd-4bb2-8a0a-3e5ce4972d8f\") " pod="openshift-dns/dns-default-q5rqg" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.833283 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3c72db6-4315-4210-9cfe-3c27b18e4abd-secret-volume\") pod \"collect-profiles-29500875-tdl29\" (UID: \"f3c72db6-4315-4210-9cfe-3c27b18e4abd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.833535 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2c1d5eb0-d864-4444-a3a1-a67447663431-node-bootstrap-token\") pod \"machine-config-server-l9845\" (UID: \"2c1d5eb0-d864-4444-a3a1-a67447663431\") " pod="openshift-machine-config-operator/machine-config-server-l9845" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.834184 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6abb10d-c9ad-4bd7-aa5e-5b07a8c2f68c-config\") pod 
\"service-ca-operator-777779d784-jxr6v\" (UID: \"b6abb10d-c9ad-4bd7-aa5e-5b07a8c2f68c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jxr6v" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.834212 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7977753-e2b1-4818-b536-98b053f60c3b-webhook-cert\") pod \"packageserver-d55dfcdfc-nxtrt\" (UID: \"e7977753-e2b1-4818-b536-98b053f60c3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.834325 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/247bf416-2fae-40d0-be7c-c573e55312f6-cert\") pod \"ingress-canary-wf65d\" (UID: \"247bf416-2fae-40d0-be7c-c573e55312f6\") " pod="openshift-ingress-canary/ingress-canary-wf65d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.835081 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ba109ec6-f5c1-47a3-bb82-7464f3d5508e-proxy-tls\") pod \"machine-config-controller-84d6567774-lfg9z\" (UID: \"ba109ec6-f5c1-47a3-bb82-7464f3d5508e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfg9z" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.835552 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjw7v\" (UniqueName: \"kubernetes.io/projected/b68e9688-43c8-4f2e-aafa-8b634249fa5e-kube-api-access-vjw7v\") pod \"machine-config-operator-74547568cd-xvhj7\" (UID: \"b68e9688-43c8-4f2e-aafa-8b634249fa5e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xvhj7" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.835825 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2c1d5eb0-d864-4444-a3a1-a67447663431-certs\") pod \"machine-config-server-l9845\" (UID: \"2c1d5eb0-d864-4444-a3a1-a67447663431\") " pod="openshift-machine-config-operator/machine-config-server-l9845" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.850909 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9d8c712-038d-47f2-931a-e6cf3af58665-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bhw7l\" (UID: \"c9d8c712-038d-47f2-931a-e6cf3af58665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bhw7l" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.872027 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pdjs\" (UniqueName: \"kubernetes.io/projected/3fd108fb-dfcf-4826-a2e0-8e4877e90f0a-kube-api-access-6pdjs\") pod \"kube-storage-version-migrator-operator-b67b599dd-dtf2d\" (UID: \"3fd108fb-dfcf-4826-a2e0-8e4877e90f0a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dtf2d" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.890493 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsg2r\" (UniqueName: \"kubernetes.io/projected/e909476f-4cf5-4240-a8bc-51ed96ff8fee-kube-api-access-tsg2r\") pod \"etcd-operator-b45778765-sscgm\" (UID: \"e909476f-4cf5-4240-a8bc-51ed96ff8fee\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.913163 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5hd6\" (UniqueName: \"kubernetes.io/projected/6b927555-7584-46c4-ba20-4899aa734e9e-kube-api-access-k5hd6\") pod \"olm-operator-6b444d44fb-v422r\" (UID: \"6b927555-7584-46c4-ba20-4899aa734e9e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v422r" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.928823 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:21 crc kubenswrapper[4858]: E0202 17:17:21.929362 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:22.429341432 +0000 UTC m=+143.581756697 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.931847 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfjsz\" (UniqueName: \"kubernetes.io/projected/5b905a5a-09b5-4cce-a38d-34a92e704c0b-kube-api-access-mfjsz\") pod \"openshift-controller-manager-operator-756b6f6bc6-6l265\" (UID: \"5b905a5a-09b5-4cce-a38d-34a92e704c0b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6l265" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.947095 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4"] Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.951929 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj"] Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.953816 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb9mj\" (UniqueName: \"kubernetes.io/projected/ac750cb2-c0b0-4044-82c1-41c0a46748e6-kube-api-access-qb9mj\") pod \"multus-admission-controller-857f4d67dd-8nr74\" (UID: \"ac750cb2-c0b0-4044-82c1-41c0a46748e6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8nr74" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.955561 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jw86f" Feb 02 17:17:21 crc kubenswrapper[4858]: W0202 17:17:21.956416 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5109f31b_0e6e_447b_90cc_78ebbc465626.slice/crio-792260f389071e9654af5ad0c544de7ff17d3ee48f823ac8f6d0b7bc8fc794a8 WatchSource:0}: Error finding container 792260f389071e9654af5ad0c544de7ff17d3ee48f823ac8f6d0b7bc8fc794a8: Status 404 returned error can't find the container with id 792260f389071e9654af5ad0c544de7ff17d3ee48f823ac8f6d0b7bc8fc794a8 Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.969821 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.974320 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h85qc\" (UniqueName: \"kubernetes.io/projected/91c97885-896e-4947-907f-acb0a86ce947-kube-api-access-h85qc\") pod \"package-server-manager-789f6589d5-mj6r9\" (UID: \"91c97885-896e-4947-907f-acb0a86ce947\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mj6r9" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.977995 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rph8" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.991911 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xvhj7" Feb 02 17:17:21 crc kubenswrapper[4858]: I0202 17:17:21.993091 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9864123-f9a0-4524-87ad-131c294b1ffe-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-c2sv9\" (UID: \"e9864123-f9a0-4524-87ad-131c294b1ffe\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2sv9" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.015077 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dtf2d" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.024965 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mj6r9" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.025347 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65gng\" (UniqueName: \"kubernetes.io/projected/e330b41c-dacd-4c4b-a013-dd16a913ac54-kube-api-access-65gng\") pod \"control-plane-machine-set-operator-78cbb6b69f-hgpp5\" (UID: \"e330b41c-dacd-4c4b-a013-dd16a913ac54\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgpp5" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.027266 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8xcw" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.029208 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9n5ph"] Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.029263 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-npxd5"] Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.031683 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:22 crc kubenswrapper[4858]: E0202 17:17:22.032235 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:22.532220994 +0000 UTC m=+143.684636259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.035202 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfnl7\" (UniqueName: \"kubernetes.io/projected/60962908-fe41-4333-80b3-ed8bbc9c4fcb-kube-api-access-sfnl7\") pod \"openshift-config-operator-7777fb866f-45snr\" (UID: \"60962908-fe41-4333-80b3-ed8bbc9c4fcb\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-45snr" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.038558 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9dfzw" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.045792 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-8nr74" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.054235 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9n9ld\" (UID: \"dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9n9ld" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.057011 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v422r" Feb 02 17:17:22 crc kubenswrapper[4858]: W0202 17:17:22.071648 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ce76d15_6d25_4fe4_88e8_bde4a27c5a73.slice/crio-b168df4970b8b3cf896486a9f4e4574cfc47d0f873774be06aa8b8fb1994cdf2 WatchSource:0}: Error finding container b168df4970b8b3cf896486a9f4e4574cfc47d0f873774be06aa8b8fb1994cdf2: Status 404 returned error can't find the container with id b168df4970b8b3cf896486a9f4e4574cfc47d0f873774be06aa8b8fb1994cdf2 Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.074946 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6jg6\" (UniqueName: \"kubernetes.io/projected/c022725c-9725-4d5c-a703-5d61c931d9e8-kube-api-access-c6jg6\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.095185 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2sv9" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.103099 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfxpk\" (UniqueName: \"kubernetes.io/projected/dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24-kube-api-access-vfxpk\") pod \"cluster-image-registry-operator-dc59b4c8b-9n9ld\" (UID: \"dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9n9ld" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.130496 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6l265" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.132353 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:22 crc kubenswrapper[4858]: E0202 17:17:22.132500 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:22.632479768 +0000 UTC m=+143.784895033 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.132899 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:22 crc kubenswrapper[4858]: E0202 17:17:22.133354 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:22.633342843 +0000 UTC m=+143.785758108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.133656 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-45snr" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.145618 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xchr6\" (UniqueName: \"kubernetes.io/projected/d8edb432-a713-4b13-a42d-510636c08f81-kube-api-access-xchr6\") pod \"console-operator-58897d9998-rnrz8\" (UID: \"d8edb432-a713-4b13-a42d-510636c08f81\") " pod="openshift-console-operator/console-operator-58897d9998-rnrz8" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.151473 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jw86f"] Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.176251 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfq2n\" (UniqueName: \"kubernetes.io/projected/c9d8c712-038d-47f2-931a-e6cf3af58665-kube-api-access-hfq2n\") pod \"ingress-operator-5b745b69d9-bhw7l\" (UID: \"c9d8c712-038d-47f2-931a-e6cf3af58665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bhw7l" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.190103 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c022725c-9725-4d5c-a703-5d61c931d9e8-bound-sa-token\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.208911 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpwxz\" (UniqueName: \"kubernetes.io/projected/fd740a32-9003-4d27-8c7b-3423717fd9bf-kube-api-access-cpwxz\") pod \"downloads-7954f5f757-ffh76\" (UID: \"fd740a32-9003-4d27-8c7b-3423717fd9bf\") " pod="openshift-console/downloads-7954f5f757-ffh76" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.218251 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d57s\" (UniqueName: \"kubernetes.io/projected/4af8047b-d906-4458-84e9-4cbefe269b59-kube-api-access-7d57s\") pod \"router-default-5444994796-frw2d\" (UID: \"4af8047b-d906-4458-84e9-4cbefe269b59\") " pod="openshift-ingress/router-default-5444994796-frw2d" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.222859 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" event={"ID":"0b36dd0a-26c9-4d5f-ae02-aa432af223ad","Type":"ContainerStarted","Data":"6bd65021832fb1ebf57569393e6c2a2b4138dcc0981daf10cdf45f8be31dd190"} Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.222921 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" event={"ID":"0b36dd0a-26c9-4d5f-ae02-aa432af223ad","Type":"ContainerStarted","Data":"0da56fd2ac3c9be5ee51ddb757f79ef3e8827a6116379ccaefbec418b3d0dd18"} Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.228478 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.232058 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00d93965-f44b-4003-8093-13f33936021e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wd5ld\" (UID: 
\"00d93965-f44b-4003-8093-13f33936021e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wd5ld" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.232209 4858 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7c9rp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.232246 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" podUID="0b36dd0a-26c9-4d5f-ae02-aa432af223ad" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.233743 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:22 crc kubenswrapper[4858]: E0202 17:17:22.233863 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:22.733847995 +0000 UTC m=+143.886263260 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.233935 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:22 crc kubenswrapper[4858]: E0202 17:17:22.234245 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:22.734234966 +0000 UTC m=+143.886650231 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.239236 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zq59f" event={"ID":"24d4e737-2ce5-405b-ba6a-74310353dd54","Type":"ContainerStarted","Data":"41ae7a71e7feaeef0dff5f2a876203e097ea47b3a4aee30a59698fd86baef121"} Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.239290 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zq59f" event={"ID":"24d4e737-2ce5-405b-ba6a-74310353dd54","Type":"ContainerStarted","Data":"6737d0297e4ad0dd83f992c62ade3e5cd675698ce949486e2c464945b8db055b"} Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.239305 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zq59f" event={"ID":"24d4e737-2ce5-405b-ba6a-74310353dd54","Type":"ContainerStarted","Data":"19b3abec6d5f2e58ea58914a12b60d6f174fa28a15add658829e0f32f855df11"} Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.241812 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9n9ld" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.242104 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rph8"] Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.247941 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rnrz8" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.248290 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4" event={"ID":"b330afef-9be2-4944-b014-0b6b2478316d","Type":"ContainerStarted","Data":"48ab0488dc16a2a771cc7a688671e5a1fd75e4b017b33daaa9d0bef5570d57cc"} Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.248355 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4" event={"ID":"b330afef-9be2-4944-b014-0b6b2478316d","Type":"ContainerStarted","Data":"ca4c59e0b4cb469e73be338c19bc6a8ab21380c015647009e238e54796b6e1be"} Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.248378 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.251872 4858 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-9t6d4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.251911 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4" podUID="b330afef-9be2-4944-b014-0b6b2478316d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.253100 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ssvjj" event={"ID":"f4d39c6c-15e3-48a3-82be-2bc3703dbc7f","Type":"ContainerStarted","Data":"c149cf2a72b20ad8e2d5b2c64bc2085a30eb93bba94f758e8ddbe5de6ac71662"} Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.253139 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ssvjj" event={"ID":"f4d39c6c-15e3-48a3-82be-2bc3703dbc7f","Type":"ContainerStarted","Data":"871d52082e6e5da1ad73882e4f693b02d03f9083125aaea7e40bd5bc925f8c9c"} Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.253151 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ssvjj" event={"ID":"f4d39c6c-15e3-48a3-82be-2bc3703dbc7f","Type":"ContainerStarted","Data":"517c2ccac426bf959a8af0376a8e97f0fab29d802428b5641b538cc61bdab95a"} Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.256097 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2c9b\" (UniqueName: \"kubernetes.io/projected/670c2ac6-e01a-4a3b-922b-a8d8aadd693e-kube-api-access-t2c9b\") pod \"service-ca-9c57cc56f-nzlsw\" (UID: \"670c2ac6-e01a-4a3b-922b-a8d8aadd693e\") " pod="openshift-service-ca/service-ca-9c57cc56f-nzlsw" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.258860 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d75bl" 
event={"ID":"8d6a3975-f77c-4e1d-bc4a-9f34708d2421","Type":"ContainerStarted","Data":"0235fc7047d84b1c52a383d5c9564db06e4e9a6a6130b14a6781848875d409de"} Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.258891 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d75bl" event={"ID":"8d6a3975-f77c-4e1d-bc4a-9f34708d2421","Type":"ContainerStarted","Data":"2471d8ad2691ea8141756d947935df7ab37754af217acd1a94ac129757437f55"} Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.265475 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-frw2d" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.268907 4858 generic.go:334] "Generic (PLEG): container finished" podID="42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0" containerID="262692de71b43954f8725ce82688a070c71ef5bae3ca540d5d9bfddfecc3db99" exitCode=0 Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.269563 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" event={"ID":"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0","Type":"ContainerDied","Data":"262692de71b43954f8725ce82688a070c71ef5bae3ca540d5d9bfddfecc3db99"} Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.269589 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" event={"ID":"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0","Type":"ContainerStarted","Data":"c16290e39f657fd1a897695cf7b6454aacea60212c0eebaf8f708974f115a5ed"} Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.273813 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zww4k" event={"ID":"84734edc-960c-4a16-9281-b10a1dc0a710","Type":"ContainerStarted","Data":"1d176a6cf98c50454b705c1a2e73aed7240dfa8d3e91d557caba1f61d6bf3fff"} Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.274080 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zww4k" event={"ID":"84734edc-960c-4a16-9281-b10a1dc0a710","Type":"ContainerStarted","Data":"a4ad7a5ed785515bc25b2b1e07c695b3275a1885dbd834ffbb50bfce914d62dd"} Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.280324 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7wzr\" (UniqueName: \"kubernetes.io/projected/92f95f3e-0d8e-4ba7-a68f-438ca7f8c610-kube-api-access-x7wzr\") pod \"csi-hostpathplugin-xr77f\" (UID: \"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610\") " pod="hostpath-provisioner/csi-hostpathplugin-xr77f" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.291299 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" event={"ID":"5109f31b-0e6e-447b-90cc-78ebbc465626","Type":"ContainerStarted","Data":"792260f389071e9654af5ad0c544de7ff17d3ee48f823ac8f6d0b7bc8fc794a8"} Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.298673 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-799bl\" (UniqueName: \"kubernetes.io/projected/df86f4d9-d92b-4b3b-9369-3adad4f3fbd1-kube-api-access-799bl\") pod \"authentication-operator-69f744f599-qvvdf\" (UID: \"df86f4d9-d92b-4b3b-9369-3adad4f3fbd1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-qvvdf" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.299736 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bhw7l" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.300152 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wd5ld" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.304410 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" event={"ID":"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73","Type":"ContainerStarted","Data":"b168df4970b8b3cf896486a9f4e4574cfc47d0f873774be06aa8b8fb1994cdf2"} Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.307263 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgpp5" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.313135 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-npxd5" event={"ID":"7e918dfc-5224-43da-9b18-19939e269562","Type":"ContainerStarted","Data":"0a83b759f3ca0843ca1a50e17af76441fd4f2f5cfe420a28ccbe232b2d032777"} Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.323246 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf69k\" (UniqueName: \"kubernetes.io/projected/e7977753-e2b1-4818-b536-98b053f60c3b-kube-api-access-pf69k\") pod \"packageserver-d55dfcdfc-nxtrt\" (UID: \"e7977753-e2b1-4818-b536-98b053f60c3b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.335416 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:22 crc kubenswrapper[4858]: E0202 17:17:22.336516 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:22.83649952 +0000 UTC m=+143.988914785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.340631 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s69g4\" (UniqueName: \"kubernetes.io/projected/89d9c9f7-5f7c-4cc0-add0-bd38785c308e-kube-api-access-s69g4\") pod \"marketplace-operator-79b997595-nd9bb\" (UID: \"89d9c9f7-5f7c-4cc0-add0-bd38785c308e\") " pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.356447 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbr65\" (UniqueName: \"kubernetes.io/projected/f8ebe30f-bbdd-4bb2-8a0a-3e5ce4972d8f-kube-api-access-zbr65\") pod \"dns-default-q5rqg\" (UID: \"f8ebe30f-bbdd-4bb2-8a0a-3e5ce4972d8f\") " pod="openshift-dns/dns-default-q5rqg" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.371408 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.374976 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pc8k\" (UniqueName: \"kubernetes.io/projected/f3c72db6-4315-4210-9cfe-3c27b18e4abd-kube-api-access-2pc8k\") pod \"collect-profiles-29500875-tdl29\" (UID: \"f3c72db6-4315-4210-9cfe-3c27b18e4abd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.378652 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.385680 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-nzlsw" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.390792 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-qvvdf" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.396457 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xbd8\" (UniqueName: \"kubernetes.io/projected/ba109ec6-f5c1-47a3-bb82-7464f3d5508e-kube-api-access-6xbd8\") pod \"machine-config-controller-84d6567774-lfg9z\" (UID: \"ba109ec6-f5c1-47a3-bb82-7464f3d5508e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfg9z" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.416678 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.419624 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djbk8\" (UniqueName: \"kubernetes.io/projected/b6abb10d-c9ad-4bd7-aa5e-5b07a8c2f68c-kube-api-access-djbk8\") pod \"service-ca-operator-777779d784-jxr6v\" (UID: \"b6abb10d-c9ad-4bd7-aa5e-5b07a8c2f68c\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jxr6v" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.419902 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-q5rqg" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.437172 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:22 crc kubenswrapper[4858]: E0202 17:17:22.437690 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:22.937671671 +0000 UTC m=+144.090086936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.439356 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvb4g\" (UniqueName: \"kubernetes.io/projected/247bf416-2fae-40d0-be7c-c573e55312f6-kube-api-access-lvb4g\") pod \"ingress-canary-wf65d\" (UID: \"247bf416-2fae-40d0-be7c-c573e55312f6\") " pod="openshift-ingress-canary/ingress-canary-wf65d" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.450290 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xr77f" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.462679 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx784\" (UniqueName: \"kubernetes.io/projected/2c1d5eb0-d864-4444-a3a1-a67447663431-kube-api-access-dx784\") pod \"machine-config-server-l9845\" (UID: \"2c1d5eb0-d864-4444-a3a1-a67447663431\") " pod="openshift-machine-config-operator/machine-config-server-l9845" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.484677 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-ffh76" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.486712 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8xcw"] Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.541411 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:22 crc kubenswrapper[4858]: E0202 17:17:22.541897 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:23.041881032 +0000 UTC m=+144.194296307 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.606332 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-xvhj7"] Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.607819 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sscgm"] Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.619379 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9dfzw"] Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.624600 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mj6r9"] Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.628766 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dtf2d"] Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.649139 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8nr74"] Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.649240 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:22 crc kubenswrapper[4858]: E0202 17:17:22.649563 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:23.149547615 +0000 UTC m=+144.301962880 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.673188 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfg9z" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.698448 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-l9845" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.709220 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jxr6v" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.727362 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-wf65d" Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.752178 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:22 crc kubenswrapper[4858]: E0202 17:17:22.752510 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:23.252495249 +0000 UTC m=+144.404910514 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:22 crc kubenswrapper[4858]: W0202 17:17:22.773666 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac750cb2_c0b0_4044_82c1_41c0a46748e6.slice/crio-c107f238ec287689b460ef84e4714111d8f6aeedc5870fa68a5f26c33ae2fa0c WatchSource:0}: Error finding container c107f238ec287689b460ef84e4714111d8f6aeedc5870fa68a5f26c33ae2fa0c: Status 404 returned error can't find the container with id c107f238ec287689b460ef84e4714111d8f6aeedc5870fa68a5f26c33ae2fa0c Feb 02 17:17:22 crc kubenswrapper[4858]: W0202 17:17:22.776989 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77955d8c_4a9c_4484_ad86_dbe070fb4451.slice/crio-ad7f96fb91397676603610641008209aac9ba862bd5425c26179a10a11490951 WatchSource:0}: Error finding container ad7f96fb91397676603610641008209aac9ba862bd5425c26179a10a11490951: Status 404 returned error can't find the container with id ad7f96fb91397676603610641008209aac9ba862bd5425c26179a10a11490951 Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.803645 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgpp5"] Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.853218 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:22 crc kubenswrapper[4858]: E0202 17:17:22.853811 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:23.353789724 +0000 UTC m=+144.506204989 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.911053 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v422r"] Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.929907 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-45snr"] Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.954513 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:22 crc kubenswrapper[4858]: E0202 17:17:22.954785 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:23.454769158 +0000 UTC m=+144.607184423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.959867 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2sv9"] Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.966180 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6l265"] Feb 02 17:17:22 crc kubenswrapper[4858]: I0202 17:17:22.992557 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wd5ld"] Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.061545 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:23 crc kubenswrapper[4858]: E0202 17:17:23.062045 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:23.562029019 +0000 UTC m=+144.714444284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.072109 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4" podStartSLOduration=122.072091087 podStartE2EDuration="2m2.072091087s" podCreationTimestamp="2026-02-02 17:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:23.071336965 +0000 UTC m=+144.223752260" watchObservedRunningTime="2026-02-02 17:17:23.072091087 +0000 UTC m=+144.224506352" Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.102238 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nzlsw"] Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.126100 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rnrz8"] Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.147029 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9n9ld"] Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.162697 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:23 crc kubenswrapper[4858]: E0202 17:17:23.163045 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:23.663030605 +0000 UTC m=+144.815445870 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.196218 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bhw7l"] Feb 02 17:17:23 crc kubenswrapper[4858]: W0202 17:17:23.244791 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00d93965_f44b_4003_8093_13f33936021e.slice/crio-e83db4cb91c4efe92da817f6e904fa6b68aa9ea5b241d2db7f3b798aa7902718 WatchSource:0}: Error finding container e83db4cb91c4efe92da817f6e904fa6b68aa9ea5b241d2db7f3b798aa7902718: Status 404 returned error can't find the container with id e83db4cb91c4efe92da817f6e904fa6b68aa9ea5b241d2db7f3b798aa7902718 Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.267150 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:23 crc kubenswrapper[4858]: E0202 17:17:23.267488 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:23.767455383 +0000 UTC m=+144.919870648 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.287811 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-zww4k" podStartSLOduration=123.287791594 podStartE2EDuration="2m3.287791594s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:23.233377665 +0000 UTC m=+144.385792940" watchObservedRunningTime="2026-02-02 17:17:23.287791594 +0000 UTC m=+144.440206849" Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.338689 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nd9bb"] Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.340647 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-nzlsw" event={"ID":"670c2ac6-e01a-4a3b-922b-a8d8aadd693e","Type":"ContainerStarted","Data":"77f58871b243521fb00e087d45400a8c378f92e492c8cc7c98cefc5fb92e212f"} Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.367151 4858 csr.go:261] certificate signing request csr-x5c5q is approved, waiting to be issued Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.368491 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:23 crc kubenswrapper[4858]: E0202 17:17:23.369026 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:23.869003325 +0000 UTC m=+145.021418600 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.373825 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.374077 4858 csr.go:257] certificate signing request csr-x5c5q is issued Feb 02 17:17:23 crc kubenswrapper[4858]: E0202 17:17:23.374398 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:23.874385484 +0000 UTC m=+145.026800749 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.385334 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" event={"ID":"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73","Type":"ContainerStarted","Data":"81071d6892480c064dc70d15445461c76a54bc11e76351541c846445fde3ea9c"} Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.440684 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-npxd5" event={"ID":"7e918dfc-5224-43da-9b18-19939e269562","Type":"ContainerStarted","Data":"e0fbded35f8820c60f8b86d3e2967302555886b75ca024d80765413954528881"} Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.445528 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29"] Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.457398 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rph8" event={"ID":"75b2f0db-60c5-4143-a42d-e62ee5599503","Type":"ContainerStarted","Data":"a1319c4b3f6371184591b9f5fcfec7fbf25ec14845a4e50b0d6601f1e81d4fd7"} Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.463748 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xr77f"] Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.471436 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" 
event={"ID":"e909476f-4cf5-4240-a8bc-51ed96ff8fee","Type":"ContainerStarted","Data":"5c80efc72864cb348cb4619a197f0cfe2ae91ebdb2c6b9b2b198e6a965120f9b"} Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.474499 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.474387 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgpp5" event={"ID":"e330b41c-dacd-4c4b-a013-dd16a913ac54","Type":"ContainerStarted","Data":"fb8182bd66a4fbf848511e2964eb9c9e13860bc8164249d1ad4983c433214883"} Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.484484 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9dfzw" event={"ID":"77955d8c-4a9c-4484-ad86-dbe070fb4451","Type":"ContainerStarted","Data":"ad7f96fb91397676603610641008209aac9ba862bd5425c26179a10a11490951"} Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.484509 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v422r" event={"ID":"6b927555-7584-46c4-ba20-4899aa734e9e","Type":"ContainerStarted","Data":"0f6b05c79ef366bed6cd76c8106e86c6e050d343a440500065bcbffb472b2fc3"} Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.484519 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-l9845" event={"ID":"2c1d5eb0-d864-4444-a3a1-a67447663431","Type":"ContainerStarted","Data":"1a29d3499b76fa31387ce75768f78ecacf04cce8593f02a0a278b2d4c1b6d99d"} Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.484530 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9n9ld" event={"ID":"dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24","Type":"ContainerStarted","Data":"681c93886e8e55e81094359043db752b68a6ad6f49d802e5b8afb6247e99955e"} Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.484557 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-45snr" event={"ID":"60962908-fe41-4333-80b3-ed8bbc9c4fcb","Type":"ContainerStarted","Data":"88c533bb913be02cde27d4f0acda1fb6311d115288d75f0ba530e6fb11a62f15"} Feb 02 17:17:23 crc kubenswrapper[4858]: E0202 17:17:23.484704 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:23.984677435 +0000 UTC m=+145.137092690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.484866 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:23 crc kubenswrapper[4858]: E0202 17:17:23.485252 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:23.985239522 +0000 UTC m=+145.137654787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.487110 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2sv9" event={"ID":"e9864123-f9a0-4524-87ad-131c294b1ffe","Type":"ContainerStarted","Data":"33f735f23c4906f47a8688dcfa04feedb0dc34ee2d2c44571a3c8bbe2d586e73"} Feb 02 17:17:23 crc kubenswrapper[4858]: W0202 17:17:23.487842 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89d9c9f7_5f7c_4cc0_add0_bd38785c308e.slice/crio-9af29f0fdae21c6decc648e52100a2f72b468c131ab8987859359f1a30032970 WatchSource:0}: Error finding container 9af29f0fdae21c6decc648e52100a2f72b468c131ab8987859359f1a30032970: Status 404 returned error can't find the container with id 9af29f0fdae21c6decc648e52100a2f72b468c131ab8987859359f1a30032970 Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.488748 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xvhj7" event={"ID":"b68e9688-43c8-4f2e-aafa-8b634249fa5e","Type":"ContainerStarted","Data":"6daa4a20261fdbb1a73c0a4dccd19bcb25876f76b74f230fd4b2ee3de5d29162"} Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.489632 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8nr74" event={"ID":"ac750cb2-c0b0-4044-82c1-41c0a46748e6","Type":"ContainerStarted","Data":"c107f238ec287689b460ef84e4714111d8f6aeedc5870fa68a5f26c33ae2fa0c"} Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.491101 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dtf2d" 
event={"ID":"3fd108fb-dfcf-4826-a2e0-8e4877e90f0a","Type":"ContainerStarted","Data":"b209c77ebbf8e1ebc123b3854a280f50a95328cebcb0725995e181a4a9666651"} Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.492367 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bhw7l" event={"ID":"c9d8c712-038d-47f2-931a-e6cf3af58665","Type":"ContainerStarted","Data":"78e9e4f87ea01e7fea12a6864a383d1ea3e59411a5e4025f60c589c453e1022e"} Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.493072 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-frw2d" event={"ID":"4af8047b-d906-4458-84e9-4cbefe269b59","Type":"ContainerStarted","Data":"175e4bb75b73845ffa5d37f0957980f0e6758bf44579de12e1da36d55c5ee685"} Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.493654 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8xcw" event={"ID":"21a5ff9b-0143-4513-8139-84b24e0854af","Type":"ContainerStarted","Data":"8e1f1a8e563e181f055263f9ce726ad28dec6f09251ee4749dd78660fd00d299"} Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.494409 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wd5ld" event={"ID":"00d93965-f44b-4003-8093-13f33936021e","Type":"ContainerStarted","Data":"e83db4cb91c4efe92da817f6e904fa6b68aa9ea5b241d2db7f3b798aa7902718"} Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.496690 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6l265" event={"ID":"5b905a5a-09b5-4cce-a38d-34a92e704c0b","Type":"ContainerStarted","Data":"183858679c30d06a850e1eb5c80fcb45e8fb8b193201b914bc8aad9b206bb68d"} Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.508682 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jw86f" event={"ID":"d04acd9b-9b86-4119-b327-a3ee6f2690da","Type":"ContainerStarted","Data":"1b784eb80259436a23fb7b6c8371d36f32348f7d6b9150bdf5a4faceffc460f0"} Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.509732 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rnrz8" event={"ID":"d8edb432-a713-4b13-a42d-510636c08f81","Type":"ContainerStarted","Data":"597a4d8baeaa5b572d6d57b77357e638af30897437663375b7da6251153e2aa9"} Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.513227 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-d75bl" podStartSLOduration=123.513202149 podStartE2EDuration="2m3.513202149s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:23.507705446 +0000 UTC m=+144.660120711" watchObservedRunningTime="2026-02-02 17:17:23.513202149 +0000 UTC m=+144.665617414" Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.516561 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mj6r9" event={"ID":"91c97885-896e-4947-907f-acb0a86ce947","Type":"ContainerStarted","Data":"e6a2772f32892022f1ed467f8a7b866f133abe78d3acd6313b24ab263c2c647e"} Feb 02 17:17:23 crc 
kubenswrapper[4858]: I0202 17:17:23.528744 4858 generic.go:334] "Generic (PLEG): container finished" podID="5109f31b-0e6e-447b-90cc-78ebbc465626" containerID="156a33f447aa8459f4a8ef19c53b17aa21d96fb84a32fc784090aceae4b8ef7a" exitCode=0 Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.529498 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" event={"ID":"5109f31b-0e6e-447b-90cc-78ebbc465626","Type":"ContainerDied","Data":"156a33f447aa8459f4a8ef19c53b17aa21d96fb84a32fc784090aceae4b8ef7a"} Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.530158 4858 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-9t6d4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.530195 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4" podUID="b330afef-9be2-4944-b014-0b6b2478316d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.530914 4858 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7c9rp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.531297 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" podUID="0b36dd0a-26c9-4d5f-ae02-aa432af223ad" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.558374 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt"] Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.588754 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:23 crc kubenswrapper[4858]: E0202 17:17:23.591824 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:24.091801322 +0000 UTC m=+145.244216587 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:23 crc kubenswrapper[4858]: W0202 17:17:23.622962 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7977753_e2b1_4818_b536_98b053f60c3b.slice/crio-71c170fa5b577050e3599012fd223ae298eab4a9ea4f0b7a7f1c1e367923e856 WatchSource:0}: Error finding container 71c170fa5b577050e3599012fd223ae298eab4a9ea4f0b7a7f1c1e367923e856: Status 404 returned error can't find the container with id 71c170fa5b577050e3599012fd223ae298eab4a9ea4f0b7a7f1c1e367923e856 Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.669284 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-ssvjj" podStartSLOduration=122.669254872 podStartE2EDuration="2m2.669254872s" podCreationTimestamp="2026-02-02 17:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:23.66646424 +0000 UTC m=+144.818879515" watchObservedRunningTime="2026-02-02 17:17:23.669254872 +0000 UTC m=+144.821670157" Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.690513 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:23 crc kubenswrapper[4858]: E0202 17:17:23.693378 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:24.193308144 +0000 UTC m=+145.345723409 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.738035 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lfg9z"] Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.740911 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-qvvdf"] Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.796947 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:23 crc kubenswrapper[4858]: E0202 17:17:23.798110 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:24.298076891 +0000 UTC m=+145.450492156 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.816647 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-q5rqg"] Feb 02 17:17:23 crc kubenswrapper[4858]: W0202 17:17:23.817341 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba109ec6_f5c1_47a3_bb82_7464f3d5508e.slice/crio-e56cafe8955d9ef3f3478db560bce609a037e5c00f50b903856300e0b55d214c WatchSource:0}: Error finding container e56cafe8955d9ef3f3478db560bce609a037e5c00f50b903856300e0b55d214c: Status 404 returned error can't find the container with id e56cafe8955d9ef3f3478db560bce609a037e5c00f50b903856300e0b55d214c Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.827959 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jxr6v"] Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.838086 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ffh76"] Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.838154 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-wf65d"] Feb 02 17:17:23 crc kubenswrapper[4858]: W0202 17:17:23.863418 4858 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf86f4d9_d92b_4b3b_9369_3adad4f3fbd1.slice/crio-878bc1464141a900674791717452899b881f5bc8271a0ac756eca9cc0670c234 WatchSource:0}: Error finding container 878bc1464141a900674791717452899b881f5bc8271a0ac756eca9cc0670c234: Status 404 returned error can't find the container with id 878bc1464141a900674791717452899b881f5bc8271a0ac756eca9cc0670c234 Feb 02 17:17:23 crc kubenswrapper[4858]: I0202 17:17:23.898785 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:23 crc kubenswrapper[4858]: E0202 17:17:23.899200 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:24.39918791 +0000 UTC m=+145.551603175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:23 crc kubenswrapper[4858]: W0202 17:17:23.944253 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6abb10d_c9ad_4bd7_aa5e_5b07a8c2f68c.slice/crio-ca7b639b07a2e68e52fc947e6f26e00ff4821122813013d798f4b7de7b7db080 WatchSource:0}: Error finding container ca7b639b07a2e68e52fc947e6f26e00ff4821122813013d798f4b7de7b7db080: Status 404 returned error can't find the container with id ca7b639b07a2e68e52fc947e6f26e00ff4821122813013d798f4b7de7b7db080 Feb 02 17:17:23 crc kubenswrapper[4858]: W0202 17:17:23.946266 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod247bf416_2fae_40d0_be7c_c573e55312f6.slice/crio-2b2fc51e16e6a420c3090d5b529fbc96c5116ea84f78295a42c543c7392a168f WatchSource:0}: Error finding container 2b2fc51e16e6a420c3090d5b529fbc96c5116ea84f78295a42c543c7392a168f: Status 404 returned error can't find the container with id 2b2fc51e16e6a420c3090d5b529fbc96c5116ea84f78295a42c543c7392a168f Feb 02 17:17:23 crc kubenswrapper[4858]: W0202 17:17:23.973743 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd740a32_9003_4d27_8c7b_3423717fd9bf.slice/crio-b591ed7e732466dd7e2168b8f070792979cb2ee6ebb3ac88e7d2959eff0c9b5a WatchSource:0}: Error finding container b591ed7e732466dd7e2168b8f070792979cb2ee6ebb3ac88e7d2959eff0c9b5a: Status 404 returned error can't find the container with id b591ed7e732466dd7e2168b8f070792979cb2ee6ebb3ac88e7d2959eff0c9b5a Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.000407 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:24 crc kubenswrapper[4858]: E0202 17:17:24.002079 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:24.502056622 +0000 UTC m=+145.654471887 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:24 crc kubenswrapper[4858]: W0202 17:17:24.008616 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8ebe30f_bbdd_4bb2_8a0a_3e5ce4972d8f.slice/crio-61e56f46f833866a06ccc797fb34626ea1f6a46553aa473f70cde57ea24a7a4f WatchSource:0}: Error finding container 61e56f46f833866a06ccc797fb34626ea1f6a46553aa473f70cde57ea24a7a4f: Status 404 returned error can't find the container with id 61e56f46f833866a06ccc797fb34626ea1f6a46553aa473f70cde57ea24a7a4f Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.031400 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" podStartSLOduration=124.031381329 podStartE2EDuration="2m4.031381329s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:24.029504043 +0000 UTC m=+145.181919308" watchObservedRunningTime="2026-02-02 17:17:24.031381329 +0000 UTC m=+145.183796594" Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.068963 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zq59f" podStartSLOduration=125.0689478 podStartE2EDuration="2m5.0689478s" podCreationTimestamp="2026-02-02 17:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:24.067317691 +0000 UTC m=+145.219732956" watchObservedRunningTime="2026-02-02 17:17:24.0689478 +0000 UTC m=+145.221363065" Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.102546 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:24 crc kubenswrapper[4858]: E0202 17:17:24.102998 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 17:17:24.602966105 +0000 UTC m=+145.755381370 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.203619 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:24 crc kubenswrapper[4858]: E0202 17:17:24.204103 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:24.704084615 +0000 UTC m=+145.856499900 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.242433 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dtf2d" podStartSLOduration=123.242415278 podStartE2EDuration="2m3.242415278s" podCreationTimestamp="2026-02-02 17:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:24.242395818 +0000 UTC m=+145.394811083" watchObservedRunningTime="2026-02-02 17:17:24.242415278 +0000 UTC m=+145.394830543" Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.307296 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:24 crc kubenswrapper[4858]: E0202 17:17:24.307576 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:24.807562894 +0000 UTC m=+145.959978159 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.376658 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-02 17:12:23 +0000 UTC, rotation deadline is 2026-12-23 22:42:55.418276461 +0000 UTC Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.376694 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7781h25m31.041585033s for next certificate rotation Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.410447 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:24 crc kubenswrapper[4858]: E0202 17:17:24.410799 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:24.910784266 +0000 UTC m=+146.063199531 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.512040 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:24 crc kubenswrapper[4858]: E0202 17:17:24.512639 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:25.012628057 +0000 UTC m=+146.165043312 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.546894 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt" event={"ID":"e7977753-e2b1-4818-b536-98b053f60c3b","Type":"ContainerStarted","Data":"71c170fa5b577050e3599012fd223ae298eab4a9ea4f0b7a7f1c1e367923e856"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.550672 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9n9ld" event={"ID":"dfcd37a1-ff8a-4ed9-9100-3b9b0e1b5c24","Type":"ContainerStarted","Data":"3829cdd6a997d200c16d2ecaaf52f31e392ddfafa41e3971c57c7e4fabe462c8"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.553858 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9dfzw" event={"ID":"77955d8c-4a9c-4484-ad86-dbe070fb4451","Type":"ContainerStarted","Data":"58cc98accf67e209e5013dc3e8add59ae6c2ed5f7920a763ecaf856ebb30c5b1"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.571314 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-npxd5" event={"ID":"7e918dfc-5224-43da-9b18-19939e269562","Type":"ContainerStarted","Data":"aa2e37c2707be3c49b78d1097e11177db85ba0a7230eb6e4161fc2f2b0b54c3e"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.574822 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dtf2d" event={"ID":"3fd108fb-dfcf-4826-a2e0-8e4877e90f0a","Type":"ContainerStarted","Data":"266123e1df87d99b2f7b87ceb85e06c3c1bc2685d63b649c24f6bf4aaf55ad0f"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.576248 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9n9ld" podStartSLOduration=124.576237008 podStartE2EDuration="2m4.576237008s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:24.575715542 +0000 UTC m=+145.728130807" watchObservedRunningTime="2026-02-02 17:17:24.576237008 +0000 UTC m=+145.728652273" Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.578676 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ffh76" event={"ID":"fd740a32-9003-4d27-8c7b-3423717fd9bf","Type":"ContainerStarted","Data":"b591ed7e732466dd7e2168b8f070792979cb2ee6ebb3ac88e7d2959eff0c9b5a"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.590227 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2sv9" event={"ID":"e9864123-f9a0-4524-87ad-131c294b1ffe","Type":"ContainerStarted","Data":"315f62782a9d583e8427822dda32f87718e9377b87855404ee715a814c73d815"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 
17:17:24.598184 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v422r" event={"ID":"6b927555-7584-46c4-ba20-4899aa734e9e","Type":"ContainerStarted","Data":"a70383b677250b4f92360ebd84589dd34873a721ac55d25d90d437dbed999668"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.598545 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v422r" Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.605293 4858 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-v422r container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.605333 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v422r" podUID="6b927555-7584-46c4-ba20-4899aa734e9e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.606366 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xvhj7" event={"ID":"b68e9688-43c8-4f2e-aafa-8b634249fa5e","Type":"ContainerStarted","Data":"5b9c32a6510fac7fa684db097f9bf4d2585716ed805caa2c226fc2c347d73d19"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.606392 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xvhj7" event={"ID":"b68e9688-43c8-4f2e-aafa-8b634249fa5e","Type":"ContainerStarted","Data":"a10a9a6c8fa9b4a943c39e794f79269df756c46ef79ccb4d77eb8d892058b609"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.611788 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xr77f" event={"ID":"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610","Type":"ContainerStarted","Data":"d5a417f97c403499c99b2e9fa9fa4f154ece915a517936c8d9992dd568746138"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.612595 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.613952 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-npxd5" podStartSLOduration=124.613942233 podStartE2EDuration="2m4.613942233s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:24.595515488 +0000 UTC m=+145.747930753" watchObservedRunningTime="2026-02-02 17:17:24.613942233 +0000 UTC m=+145.766357498" Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.614214 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" 
event={"ID":"e909476f-4cf5-4240-a8bc-51ed96ff8fee","Type":"ContainerStarted","Data":"172eb270e993d531845221ea13f4adf01b6d63a315ca15048c168dab6fa7019b"} Feb 02 17:17:24 crc kubenswrapper[4858]: E0202 17:17:24.615436 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:25.115425076 +0000 UTC m=+146.267840341 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.615620 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2sv9" podStartSLOduration=124.615613802 podStartE2EDuration="2m4.615613802s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:24.615005094 +0000 UTC m=+145.767420369" watchObservedRunningTime="2026-02-02 17:17:24.615613802 +0000 UTC m=+145.768029067" Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.615707 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8nr74" event={"ID":"ac750cb2-c0b0-4044-82c1-41c0a46748e6","Type":"ContainerStarted","Data":"9b6e13798eead5dc891bcf66a5b0dd4874a52bf5c4f29414476e8508d84ad869"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.616649 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-qvvdf" event={"ID":"df86f4d9-d92b-4b3b-9369-3adad4f3fbd1","Type":"ContainerStarted","Data":"878bc1464141a900674791717452899b881f5bc8271a0ac756eca9cc0670c234"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.634810 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8xcw" event={"ID":"21a5ff9b-0143-4513-8139-84b24e0854af","Type":"ContainerStarted","Data":"cee450fb5235ece04585f2b333163b218c44ee67aa159e3c027eab65a5206482"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.634850 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8xcw" Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.661642 4858 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-g8xcw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.661903 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8xcw" podUID="21a5ff9b-0143-4513-8139-84b24e0854af" containerName="catalog-operator" probeResult="failure" output="Get 
\"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.688709 4858 generic.go:334] "Generic (PLEG): container finished" podID="60962908-fe41-4333-80b3-ed8bbc9c4fcb" containerID="52bd517af50616b93c9317963a9de7d1ab20afe19125aba94d31a252426d91ea" exitCode=0 Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.688794 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-45snr" event={"ID":"60962908-fe41-4333-80b3-ed8bbc9c4fcb","Type":"ContainerDied","Data":"52bd517af50616b93c9317963a9de7d1ab20afe19125aba94d31a252426d91ea"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.713839 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:24 crc kubenswrapper[4858]: E0202 17:17:24.715894 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:25.215879926 +0000 UTC m=+146.368295191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.717507 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29" event={"ID":"f3c72db6-4315-4210-9cfe-3c27b18e4abd","Type":"ContainerStarted","Data":"2d2736426dc9b8cf377bc45320176e344e9a75b7b04efaf3a097fdd13f77bb21"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.729035 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29" event={"ID":"f3c72db6-4315-4210-9cfe-3c27b18e4abd","Type":"ContainerStarted","Data":"070dc8ddd33570bc590a1dac8c59ba6f0c7b38d01be2abf405a67c12b78589b4"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.750286 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jw86f" event={"ID":"d04acd9b-9b86-4119-b327-a3ee6f2690da","Type":"ContainerStarted","Data":"5c8ff77283f90188a5687de82e399e680fade0f3832005dd467550226bfa40aa"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.760147 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v422r" podStartSLOduration=123.760129125 podStartE2EDuration="2m3.760129125s" podCreationTimestamp="2026-02-02 17:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:24.759363792 +0000 UTC m=+145.911779057" watchObservedRunningTime="2026-02-02 17:17:24.760129125 
+0000 UTC m=+145.912544390" Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.761690 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-xvhj7" podStartSLOduration=123.761682921 podStartE2EDuration="2m3.761682921s" podCreationTimestamp="2026-02-02 17:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:24.654399129 +0000 UTC m=+145.806814394" watchObservedRunningTime="2026-02-02 17:17:24.761682921 +0000 UTC m=+145.914098186" Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.787611 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q5rqg" event={"ID":"f8ebe30f-bbdd-4bb2-8a0a-3e5ce4972d8f","Type":"ContainerStarted","Data":"61e56f46f833866a06ccc797fb34626ea1f6a46553aa473f70cde57ea24a7a4f"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.791482 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgpp5" event={"ID":"e330b41c-dacd-4c4b-a013-dd16a913ac54","Type":"ContainerStarted","Data":"7a41be7160dfac9ec601e25327244d1c9a5489a6ab2e2e42180d27ef02de0b20"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.814901 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6l265" event={"ID":"5b905a5a-09b5-4cce-a38d-34a92e704c0b","Type":"ContainerStarted","Data":"deb498ef9a6dffed3328f718d2bcab5c02098059b94ecee647c006b52f56451f"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.832530 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:24 crc kubenswrapper[4858]: E0202 17:17:24.833494 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:25.333477983 +0000 UTC m=+146.485893248 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.849855 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb" event={"ID":"89d9c9f7-5f7c-4cc0-add0-bd38785c308e","Type":"ContainerStarted","Data":"9af29f0fdae21c6decc648e52100a2f72b468c131ab8987859359f1a30032970"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.853479 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb" Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.862321 4858 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nd9bb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.862386 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb" podUID="89d9c9f7-5f7c-4cc0-add0-bd38785c308e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.862654 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-frw2d" event={"ID":"4af8047b-d906-4458-84e9-4cbefe269b59","Type":"ContainerStarted","Data":"ba42e414cdd74b41ba2d2dcfd0bfeb33b1efa97e1a0731a08301ed9589c049f8"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.872446 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-nzlsw" event={"ID":"670c2ac6-e01a-4a3b-922b-a8d8aadd693e","Type":"ContainerStarted","Data":"fabfd3ac2cb4c8d135eb90619415490b2567b8f812f7e9907719933bef991b7a"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.884957 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jxr6v" event={"ID":"b6abb10d-c9ad-4bd7-aa5e-5b07a8c2f68c","Type":"ContainerStarted","Data":"ca7b639b07a2e68e52fc947e6f26e00ff4821122813013d798f4b7de7b7db080"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.891808 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-sscgm" podStartSLOduration=124.891791607 podStartE2EDuration="2m4.891791607s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:24.852411183 +0000 UTC m=+146.004826448" watchObservedRunningTime="2026-02-02 17:17:24.891791607 +0000 UTC m=+146.044206872" Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.900644 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-wf65d" 
event={"ID":"247bf416-2fae-40d0-be7c-c573e55312f6","Type":"ContainerStarted","Data":"2b2fc51e16e6a420c3090d5b529fbc96c5116ea84f78295a42c543c7392a168f"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.903488 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" event={"ID":"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0","Type":"ContainerStarted","Data":"f832d46a08a04df0f05f83f9f37c827591b187dff813ae8373e24baf88c1dfc5"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.903531 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" event={"ID":"42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0","Type":"ContainerStarted","Data":"4b368b75fd8985c5f07988446f2249f58aa13581cb0ebc8b7976fab5e8e0dd5a"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.920854 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hgpp5" podStartSLOduration=123.920837906 podStartE2EDuration="2m3.920837906s" podCreationTimestamp="2026-02-02 17:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:24.920347382 +0000 UTC m=+146.072762647" watchObservedRunningTime="2026-02-02 17:17:24.920837906 +0000 UTC m=+146.073253171" Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.933729 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.938164 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29" podStartSLOduration=125.938149008 podStartE2EDuration="2m5.938149008s" podCreationTimestamp="2026-02-02 17:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:24.938110917 +0000 UTC m=+146.090526182" watchObservedRunningTime="2026-02-02 17:17:24.938149008 +0000 UTC m=+146.090564263" Feb 02 17:17:24 crc kubenswrapper[4858]: E0202 17:17:24.938492 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:25.438474518 +0000 UTC m=+146.590889853 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.939704 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rph8" event={"ID":"75b2f0db-60c5-4143-a42d-e62ee5599503","Type":"ContainerStarted","Data":"bebd0316065291d1b2070178ef07fc0f8afd72ec98779bbdd7fa10f3b2ec426f"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.959989 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mj6r9" event={"ID":"91c97885-896e-4947-907f-acb0a86ce947","Type":"ContainerStarted","Data":"d3f1aeaf79c7dd2baa8e2bbb207c8456696673b69d43bd742a9bf118622bb0aa"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.978960 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8xcw" podStartSLOduration=123.978939534 podStartE2EDuration="2m3.978939534s" podCreationTimestamp="2026-02-02 17:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:24.964058714 +0000 UTC m=+146.116473969" watchObservedRunningTime="2026-02-02 17:17:24.978939534 +0000 UTC m=+146.131354799" Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.991537 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfg9z" event={"ID":"ba109ec6-f5c1-47a3-bb82-7464f3d5508e","Type":"ContainerStarted","Data":"e56cafe8955d9ef3f3478db560bce609a037e5c00f50b903856300e0b55d214c"} Feb 02 17:17:24 crc kubenswrapper[4858]: I0202 17:17:24.991836 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.040332 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:25 crc kubenswrapper[4858]: E0202 17:17:25.042305 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:25.542283877 +0000 UTC m=+146.694699142 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.064820 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6l265" podStartSLOduration=125.064804133 podStartE2EDuration="2m5.064804133s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:25.006785897 +0000 UTC m=+146.159201162" watchObservedRunningTime="2026-02-02 17:17:25.064804133 +0000 UTC m=+146.217219398" Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.065087 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb" podStartSLOduration=124.065083391 podStartE2EDuration="2m4.065083391s" podCreationTimestamp="2026-02-02 17:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:25.063916556 +0000 UTC m=+146.216331821" watchObservedRunningTime="2026-02-02 17:17:25.065083391 +0000 UTC m=+146.217498656" Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.108194 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-frw2d" podStartSLOduration=125.108146604 podStartE2EDuration="2m5.108146604s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:25.094331736 +0000 UTC m=+146.246747001" watchObservedRunningTime="2026-02-02 17:17:25.108146604 +0000 UTC m=+146.260561869" Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.135650 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4rph8" podStartSLOduration=125.135628807 podStartE2EDuration="2m5.135628807s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:25.128467545 +0000 UTC m=+146.280882810" watchObservedRunningTime="2026-02-02 17:17:25.135628807 +0000 UTC m=+146.288044072" Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.143104 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:25 crc kubenswrapper[4858]: E0202 17:17:25.143413 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-02 17:17:25.643400146 +0000 UTC m=+146.795815411 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.182548 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-nzlsw" podStartSLOduration=124.182523253 podStartE2EDuration="2m4.182523253s" podCreationTimestamp="2026-02-02 17:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:25.172909779 +0000 UTC m=+146.325325044" watchObservedRunningTime="2026-02-02 17:17:25.182523253 +0000 UTC m=+146.334938518" Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.213502 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" podStartSLOduration=125.213481608 podStartE2EDuration="2m5.213481608s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:25.211091198 +0000 UTC m=+146.363506473" watchObservedRunningTime="2026-02-02 17:17:25.213481608 +0000 UTC m=+146.365896873" Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.240780 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" podStartSLOduration=125.240765375 podStartE2EDuration="2m5.240765375s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:25.240368243 +0000 UTC m=+146.392783508" watchObservedRunningTime="2026-02-02 17:17:25.240765375 +0000 UTC m=+146.393180640" Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.249407 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:25 crc kubenswrapper[4858]: E0202 17:17:25.249791 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:25.749776801 +0000 UTC m=+146.902192066 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.271220 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-frw2d"
Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.283172 4858 patch_prober.go:28] interesting pod/router-default-5444994796-frw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 17:17:25 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld
Feb 02 17:17:25 crc kubenswrapper[4858]: [+]process-running ok
Feb 02 17:17:25 crc kubenswrapper[4858]: healthz check failed
Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.283232 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-frw2d" podUID="4af8047b-d906-4458-84e9-4cbefe269b59" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.351026 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:25 crc kubenswrapper[4858]: E0202 17:17:25.351392 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:25.851380695 +0000 UTC m=+147.003795960 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.451754 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:17:25 crc kubenswrapper[4858]: E0202 17:17:25.451888 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:25.951864826 +0000 UTC m=+147.104280091 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.453444 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:25 crc kubenswrapper[4858]: E0202 17:17:25.453813 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:25.953801574 +0000 UTC m=+147.106216839 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.514570 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph"
Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.558427 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:17:25 crc kubenswrapper[4858]: E0202 17:17:25.558849 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:26.058834619 +0000 UTC m=+147.211249884 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.659854 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:25 crc kubenswrapper[4858]: E0202 17:17:25.660234 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:26.160217466 +0000 UTC m=+147.312632731 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.761258 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:17:25 crc kubenswrapper[4858]: E0202 17:17:25.761421 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:26.261394018 +0000 UTC m=+147.413809283 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.761818 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:25 crc kubenswrapper[4858]: E0202 17:17:25.762154 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:26.26214624 +0000 UTC m=+147.414561505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.863525 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:17:25 crc kubenswrapper[4858]: E0202 17:17:25.863906 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:26.363892758 +0000 UTC m=+147.516308023 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.964692 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:25 crc kubenswrapper[4858]: E0202 17:17:25.965087 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:26.4650711 +0000 UTC m=+147.617486365 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.997833 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-45snr" event={"ID":"60962908-fe41-4333-80b3-ed8bbc9c4fcb","Type":"ContainerStarted","Data":"f0b5424b7978fb49bbf17dc6a81e4b543500f82eb35afc95162d1f2da229a501"}
Feb 02 17:17:25 crc kubenswrapper[4858]: I0202 17:17:25.998308 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-45snr"
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.000071 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q5rqg" event={"ID":"f8ebe30f-bbdd-4bb2-8a0a-3e5ce4972d8f","Type":"ContainerStarted","Data":"65de94263d9eff89e57fbb79632e9183b46d52565c3db10287f0b007be07996b"}
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.000094 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q5rqg" event={"ID":"f8ebe30f-bbdd-4bb2-8a0a-3e5ce4972d8f","Type":"ContainerStarted","Data":"7fecd6c770b56aaee9c5db3d05ab15c89d9499d43886337e03c480b29043fed6"}
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.000442 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-q5rqg"
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.001471 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ffh76" event={"ID":"fd740a32-9003-4d27-8c7b-3423717fd9bf","Type":"ContainerStarted","Data":"8a0dbbecd730a817039478152cb6477c93df12a304061712378aad0744deaf11"}
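
Every MountVolume.MountDevice and UnmountVolume.TearDown attempt above fails with the same root cause: the kubelet cannot find kubevirt.io.hostpath-provisioner among its registered CSI drivers, presumably because the hostpath plugin pod has not yet re-registered over the kubelet's plugin socket after the restart. The Go sketch below is illustrative only (the registry type and function names are assumptions, not kubelet source); it shows the lookup-then-fail pattern that produces this exact message: a volume operation resolves the driver by name in an in-memory registry populated at plugin registration time, and errors out immediately when the name is absent.

    package main

    import (
        "fmt"
        "sync"
    )

    // csiDriverRegistry stands in for the kubelet's set of node plugins
    // that have completed socket registration (illustrative assumption).
    type csiDriverRegistry struct {
        mu      sync.RWMutex
        drivers map[string]string // driver name -> endpoint (assumed shape)
    }

    // client resolves a driver by name, failing fast when it is unknown,
    // which mirrors the "not found in the list of registered CSI drivers"
    // errors in the log above.
    func (r *csiDriverRegistry) client(name string) (string, error) {
        r.mu.RLock()
        defer r.mu.RUnlock()
        ep, ok := r.drivers[name]
        if !ok {
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
        }
        return ep, nil
    }

    func main() {
        reg := &csiDriverRegistry{drivers: map[string]string{}}
        // The plugin container has started but not yet registered,
        // so the lookup still fails:
        _, err := reg.client("kubevirt.io.hostpath-provisioner")
        fmt.Println(err)
    }

Once the hostpath plugin (csi-hostpathplugin-xr77f, whose ContainerStarted event appears below) finishes registering, the same lookup succeeds and these retries stop.
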
pod="openshift-console/downloads-7954f5f757-ffh76" Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.003613 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-ffh76 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.003650 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ffh76" podUID="fd740a32-9003-4d27-8c7b-3423717fd9bf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.004270 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rnrz8" event={"ID":"d8edb432-a713-4b13-a42d-510636c08f81","Type":"ContainerStarted","Data":"cd1ea7e0155063358d82c3390ea908b90ad6557a4abe55ae3301aae45d9207f4"} Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.004444 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-rnrz8" Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.005441 4858 patch_prober.go:28] interesting pod/console-operator-58897d9998-rnrz8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.005479 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rnrz8" podUID="d8edb432-a713-4b13-a42d-510636c08f81" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.006501 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-l9845" event={"ID":"2c1d5eb0-d864-4444-a3a1-a67447663431","Type":"ContainerStarted","Data":"1880d4984885efa5c0918ccb256cbfb8a5b275d5a9dac3746eca351cb3137411"} Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.008372 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb" event={"ID":"89d9c9f7-5f7c-4cc0-add0-bd38785c308e","Type":"ContainerStarted","Data":"32ef38fdee33da228060ef7ba5027031471564e5ad48791d60f8595e4e66a3e1"} Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.009069 4858 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nd9bb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.009100 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb" podUID="89d9c9f7-5f7c-4cc0-add0-bd38785c308e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.011013 4858 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bhw7l" event={"ID":"c9d8c712-038d-47f2-931a-e6cf3af58665","Type":"ContainerStarted","Data":"6acad8705e1152cd42ba3eaa4da855cdcdb8af78fdb27cb922fdb245b9ec7547"} Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.011057 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bhw7l" event={"ID":"c9d8c712-038d-47f2-931a-e6cf3af58665","Type":"ContainerStarted","Data":"2983d400323b16828885b7771bbaf7f1d3203d1ec29ed94dbeb14cac49a41eea"} Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.012191 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jw86f" event={"ID":"d04acd9b-9b86-4119-b327-a3ee6f2690da","Type":"ContainerStarted","Data":"be3dfffe3140f08013a6f4fd3d1155f07d61c73f7dca37029160153624b260c0"} Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.013492 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt" event={"ID":"e7977753-e2b1-4818-b536-98b053f60c3b","Type":"ContainerStarted","Data":"dae23e3825da82bb33bf8507c98aa0acf90b5316f3f5483c9ffbde334e02a14f"} Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.013752 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt" Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.017077 4858 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nxtrt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" start-of-body= Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.017122 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt" podUID="e7977753-e2b1-4818-b536-98b053f60c3b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.017946 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9dfzw" event={"ID":"77955d8c-4a9c-4484-ad86-dbe070fb4451","Type":"ContainerStarted","Data":"b27a6b8e6d4c98a659ccf88cda20275675a0d48855e0acf02642a780df486c59"} Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.027545 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfg9z" event={"ID":"ba109ec6-f5c1-47a3-bb82-7464f3d5508e","Type":"ContainerStarted","Data":"fa9a1c003708cbbf038de11d84633a79c9f4fc2f35d5fa8ce6ceebb013563d02"} Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.027589 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfg9z" event={"ID":"ba109ec6-f5c1-47a3-bb82-7464f3d5508e","Type":"ContainerStarted","Data":"d04a6b848672580532984d3fc074e94a4b0d1de650205b2210704222e33664cd"} Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.030198 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jxr6v" 
event={"ID":"b6abb10d-c9ad-4bd7-aa5e-5b07a8c2f68c","Type":"ContainerStarted","Data":"92ede6540a4ed93b79c53a1478d78f806e2c473caf441fb5de436725dcf6e01b"} Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.034898 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8nr74" event={"ID":"ac750cb2-c0b0-4044-82c1-41c0a46748e6","Type":"ContainerStarted","Data":"488a5a8f941b2d0bc13f8b6e67baf2265033c7429a80b3e6802ccda2447baf36"} Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.040467 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-qvvdf" event={"ID":"df86f4d9-d92b-4b3b-9369-3adad4f3fbd1","Type":"ContainerStarted","Data":"9c44558b503aaa80575fe0083eae07c31f3d41af4beee02e5867f6d6bdfc036b"} Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.044722 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wd5ld" event={"ID":"00d93965-f44b-4003-8093-13f33936021e","Type":"ContainerStarted","Data":"da3e4913b9dbeed7abd2926dd78e82d52734130771ffc26eb796933026f5e600"} Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.045548 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-45snr" podStartSLOduration=126.045531868 podStartE2EDuration="2m6.045531868s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:26.043202039 +0000 UTC m=+147.195617304" watchObservedRunningTime="2026-02-02 17:17:26.045531868 +0000 UTC m=+147.197947133" Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.049858 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xr77f" event={"ID":"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610","Type":"ContainerStarted","Data":"b653acec968dfb7b4d657a5e120d03fb1a571f49d569f925a399c36f7cf40866"} Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.055226 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mj6r9" event={"ID":"91c97885-896e-4947-907f-acb0a86ce947","Type":"ContainerStarted","Data":"cefce181a071283c329b702d4045fb99ee38ccf42f7fa1b6c14825f27820d2bc"} Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.055810 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mj6r9" Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.060660 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" event={"ID":"5109f31b-0e6e-447b-90cc-78ebbc465626","Type":"ContainerStarted","Data":"1614a0dc12ec1ae91a904648a8442da4976959d9c5e388aace3066ce40902969"} Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.066591 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.067352 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-canary/ingress-canary-wf65d" event={"ID":"247bf416-2fae-40d0-be7c-c573e55312f6","Type":"ContainerStarted","Data":"e725cc71299ceccea1de931c02db3c98faf0e984a736fd4b8bf00e358c7f2c4e"} Feb 02 17:17:26 crc kubenswrapper[4858]: E0202 17:17:26.067870 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:26.567854988 +0000 UTC m=+147.720270253 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.083296 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v422r" Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.126909 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-8nr74" podStartSLOduration=125.126886894 podStartE2EDuration="2m5.126886894s" podCreationTimestamp="2026-02-02 17:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:26.096363671 +0000 UTC m=+147.248778946" watchObservedRunningTime="2026-02-02 17:17:26.126886894 +0000 UTC m=+147.279302169" Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.142986 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8xcw" Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.171485 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-qvvdf" podStartSLOduration=126.171468432 podStartE2EDuration="2m6.171468432s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:26.129921333 +0000 UTC m=+147.282336598" watchObservedRunningTime="2026-02-02 17:17:26.171468432 +0000 UTC m=+147.323883697" Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.175289 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:26 crc kubenswrapper[4858]: E0202 17:17:26.194835 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:26.694821072 +0000 UTC m=+147.847236337 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.209327 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-jw86f" podStartSLOduration=126.209308921 podStartE2EDuration="2m6.209308921s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:26.17209626 +0000 UTC m=+147.324511525" watchObservedRunningTime="2026-02-02 17:17:26.209308921 +0000 UTC m=+147.361724186" Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.211127 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-l9845" podStartSLOduration=7.211120334 podStartE2EDuration="7.211120334s" podCreationTimestamp="2026-02-02 17:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:26.209026432 +0000 UTC m=+147.361441697" watchObservedRunningTime="2026-02-02 17:17:26.211120334 +0000 UTC m=+147.363535599" Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.235426 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-ffh76" podStartSLOduration=126.235403492 podStartE2EDuration="2m6.235403492s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:26.231773805 +0000 UTC m=+147.384189060" watchObservedRunningTime="2026-02-02 17:17:26.235403492 +0000 UTC m=+147.387818757" Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.259992 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-rnrz8" podStartSLOduration=126.259964338 podStartE2EDuration="2m6.259964338s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:26.25833011 +0000 UTC m=+147.410745375" watchObservedRunningTime="2026-02-02 17:17:26.259964338 +0000 UTC m=+147.412379603" Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.276852 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:26 crc kubenswrapper[4858]: E0202 17:17:26.277811 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
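
The pod_startup_latency_tracker lines above encode an arithmetic relation you can verify by hand: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp. For openshift-config-operator:

    2026-02-02 17:17:26.045531868 - 2026-02-02 17:15:20.000000000 = 126.045531868 s = "2m6.045531868s"

which matches podStartSLOduration=126.045531868 and podStartE2EDuration="2m6.045531868s" exactly. firstStartedPulling and lastFinishedPulling are the zero timestamp (0001-01-01), presumably because no image pull was observed for these already-present images. The roughly two-minute values simply reflect pods created at 17:15:20 being observed running by a kubelet that itself has only been up for about 147 seconds (the m=+147 monotonic offsets).
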
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.276852 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:17:26 crc kubenswrapper[4858]: E0202 17:17:26.277811 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:26.777796085 +0000 UTC m=+147.930211350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.282856 4858 patch_prober.go:28] interesting pod/router-default-5444994796-frw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 17:17:26 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld
Feb 02 17:17:26 crc kubenswrapper[4858]: [+]process-running ok
Feb 02 17:17:26 crc kubenswrapper[4858]: healthz check failed
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.282928 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-frw2d" podUID="4af8047b-d906-4458-84e9-4cbefe269b59" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.333183 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bhw7l" podStartSLOduration=126.333161852 podStartE2EDuration="2m6.333161852s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:26.305290718 +0000 UTC m=+147.457705993" watchObservedRunningTime="2026-02-02 17:17:26.333161852 +0000 UTC m=+147.485577127"
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.378557 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:26 crc kubenswrapper[4858]: E0202 17:17:26.378938 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:26.878919905 +0000 UTC m=+148.031335170 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.394552 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lfg9z" podStartSLOduration=125.394527257 podStartE2EDuration="2m5.394527257s" podCreationTimestamp="2026-02-02 17:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:26.33139861 +0000 UTC m=+147.483813875" watchObservedRunningTime="2026-02-02 17:17:26.394527257 +0000 UTC m=+147.546942532"
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.395488 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-9wxc4"
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.395720 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-9wxc4"
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.435839 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-q5rqg" podStartSLOduration=7.435820598 podStartE2EDuration="7.435820598s" podCreationTimestamp="2026-02-02 17:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:26.395353171 +0000 UTC m=+147.547768436" watchObservedRunningTime="2026-02-02 17:17:26.435820598 +0000 UTC m=+147.588235863"
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.437155 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9dfzw" podStartSLOduration=125.437146957 podStartE2EDuration="2m5.437146957s" podCreationTimestamp="2026-02-02 17:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:26.43151758 +0000 UTC m=+147.583932855" watchObservedRunningTime="2026-02-02 17:17:26.437146957 +0000 UTC m=+147.589562212"
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.454253 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj"
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.454333 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj"
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.456215 4858 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-lvvvj container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.456301 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" podUID="5109f31b-0e6e-447b-90cc-78ebbc465626" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused"
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.478737 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt" podStartSLOduration=125.478714536 podStartE2EDuration="2m5.478714536s" podCreationTimestamp="2026-02-02 17:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:26.476575222 +0000 UTC m=+147.628990487" watchObservedRunningTime="2026-02-02 17:17:26.478714536 +0000 UTC m=+147.631129801"
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.493089 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:17:26 crc kubenswrapper[4858]: E0202 17:17:26.493219 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:26.993199514 +0000 UTC m=+148.145614779 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.493572 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:26 crc kubenswrapper[4858]: E0202 17:17:26.494016 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:26.993999808 +0000 UTC m=+148.146415073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.501712 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jxr6v" podStartSLOduration=125.501691234 podStartE2EDuration="2m5.501691234s" podCreationTimestamp="2026-02-02 17:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:26.501324483 +0000 UTC m=+147.653739748" watchObservedRunningTime="2026-02-02 17:17:26.501691234 +0000 UTC m=+147.654106499"
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.594357 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:17:26 crc kubenswrapper[4858]: E0202 17:17:26.594950 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:27.094933361 +0000 UTC m=+148.247348626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.610462 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wd5ld" podStartSLOduration=126.610444759 podStartE2EDuration="2m6.610444759s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:26.548516648 +0000 UTC m=+147.700931913" watchObservedRunningTime="2026-02-02 17:17:26.610444759 +0000 UTC m=+147.762860024"
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.680897 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj" podStartSLOduration=125.680879112 podStartE2EDuration="2m5.680879112s" podCreationTimestamp="2026-02-02 17:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:26.679233983 +0000 UTC m=+147.831649248" watchObservedRunningTime="2026-02-02 17:17:26.680879112 +0000 UTC m=+147.833294377"
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.681741 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-wf65d" podStartSLOduration=7.681734767 podStartE2EDuration="7.681734767s" podCreationTimestamp="2026-02-02 17:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:26.612517271 +0000 UTC m=+147.764932536" watchObservedRunningTime="2026-02-02 17:17:26.681734767 +0000 UTC m=+147.834150032"
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.695700 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:26 crc kubenswrapper[4858]: E0202 17:17:26.696184 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:27.196171964 +0000 UTC m=+148.348587219 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.717522 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mj6r9" podStartSLOduration=125.717504355 podStartE2EDuration="2m5.717504355s" podCreationTimestamp="2026-02-02 17:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:26.715092543 +0000 UTC m=+147.867507808" watchObservedRunningTime="2026-02-02 17:17:26.717504355 +0000 UTC m=+147.869919620"
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.797560 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:17:26 crc kubenswrapper[4858]: E0202 17:17:26.797910 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:27.297894181 +0000 UTC m=+148.450309446 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:26 crc kubenswrapper[4858]: I0202 17:17:26.899073 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:26 crc kubenswrapper[4858]: E0202 17:17:26.899511 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:27.399490425 +0000 UTC m=+148.551905730 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.000186 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:17:27 crc kubenswrapper[4858]: E0202 17:17:27.000374 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:27.500348907 +0000 UTC m=+148.652764182 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.001155 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:27 crc kubenswrapper[4858]: E0202 17:17:27.001526 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:27.501515762 +0000 UTC m=+148.653931087 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
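
The nestedpendingoperations lines above show how the kubelet's volume manager rate-limits these attempts: each (volume, pod) operation key records its last failure plus a backoff deadline, and the reconciler's next pass simply refuses to start the operation until that deadline passes, which is why the Mount/Unmount pairs recur at roughly 100ms reconciler intervals but each failure pushes the deadline 500ms out. In this excerpt the logged delay stays at the initial 500ms; the sketch below uses illustrative types and names, with the exponential growth as an assumption about the general mechanism rather than a claim about this log, and shows the gate-until-deadline pattern:

    package main

    import (
        "fmt"
        "time"
    )

    // opBackoff tracks the retry state for one operation key
    // (illustrative stand-in for the kubelet's per-operation record).
    type opBackoff struct {
        duration      time.Duration // backoff window; assumed to grow on repeat failures
        lastErrorTime time.Time
    }

    // fail records a failure and extends the backoff window.
    func (b *opBackoff) fail(now time.Time) {
        if b.duration == 0 {
            b.duration = 500 * time.Millisecond // initial value, as in "durationBeforeRetry 500ms"
        } else {
            b.duration *= 2 // assumed growth factor; capped in a real implementation
        }
        b.lastErrorTime = now
    }

    // allowed reports whether a retry may start yet, and the deadline
    // that corresponds to "No retries permitted until ..." in the log.
    func (b *opBackoff) allowed(now time.Time) (bool, time.Time) {
        deadline := b.lastErrorTime.Add(b.duration)
        return now.After(deadline), deadline
    }

    func main() {
        var b opBackoff
        now := time.Now()
        b.fail(now)
        ok, deadline := b.allowed(now)
        fmt.Printf("retry allowed=%v, no retries permitted until %s\n", ok, deadline.Format(time.RFC3339Nano))
    }
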
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.071693 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-ffh76 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.071713 4858 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nd9bb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body=
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.071745 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ffh76" podUID="fd740a32-9003-4d27-8c7b-3423717fd9bf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.071751 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb" podUID="89d9c9f7-5f7c-4cc0-add0-bd38785c308e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused"
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.071854 4858 patch_prober.go:28] interesting pod/console-operator-58897d9998-rnrz8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.071902 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rnrz8" podUID="d8edb432-a713-4b13-a42d-510636c08f81" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.102212 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:17:27 crc kubenswrapper[4858]: E0202 17:17:27.102407 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:27.602382154 +0000 UTC m=+148.754797419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.102832 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:27 crc kubenswrapper[4858]: E0202 17:17:27.103863 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:27.603851427 +0000 UTC m=+148.756266692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.204940 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:17:27 crc kubenswrapper[4858]: E0202 17:17:27.205111 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:27.70508524 +0000 UTC m=+148.857500505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.205363 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:27 crc kubenswrapper[4858]: E0202 17:17:27.205708 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:27.705695818 +0000 UTC m=+148.858111083 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.269858 4858 patch_prober.go:28] interesting pod/router-default-5444994796-frw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 17:17:27 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld
Feb 02 17:17:27 crc kubenswrapper[4858]: [+]process-running ok
Feb 02 17:17:27 crc kubenswrapper[4858]: healthz check failed
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.270168 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-frw2d" podUID="4af8047b-d906-4458-84e9-4cbefe269b59" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.306369 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:17:27 crc kubenswrapper[4858]: E0202 17:17:27.306952 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:27.806937132 +0000 UTC m=+148.959352397 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.396103 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qtpsf"]
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.396975 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qtpsf"
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.404845 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.408250 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b4f9546-2d15-4925-aba0-40e3b10098a0-utilities\") pod \"community-operators-qtpsf\" (UID: \"9b4f9546-2d15-4925-aba0-40e3b10098a0\") " pod="openshift-marketplace/community-operators-qtpsf"
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.408287 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b4f9546-2d15-4925-aba0-40e3b10098a0-catalog-content\") pod \"community-operators-qtpsf\" (UID: \"9b4f9546-2d15-4925-aba0-40e3b10098a0\") " pod="openshift-marketplace/community-operators-qtpsf"
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.408359 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvvxl\" (UniqueName: \"kubernetes.io/projected/9b4f9546-2d15-4925-aba0-40e3b10098a0-kube-api-access-wvvxl\") pod \"community-operators-qtpsf\" (UID: \"9b4f9546-2d15-4925-aba0-40e3b10098a0\") " pod="openshift-marketplace/community-operators-qtpsf"
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.408395 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:27 crc kubenswrapper[4858]: E0202 17:17:27.408670 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:27.908657239 +0000 UTC m=+149.061072504 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.460200 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qtpsf"]
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.509894 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:17:27 crc kubenswrapper[4858]: E0202 17:17:27.510074 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:28.010050337 +0000 UTC m=+149.162465602 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.510519 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b4f9546-2d15-4925-aba0-40e3b10098a0-utilities\") pod \"community-operators-qtpsf\" (UID: \"9b4f9546-2d15-4925-aba0-40e3b10098a0\") " pod="openshift-marketplace/community-operators-qtpsf"
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.510618 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b4f9546-2d15-4925-aba0-40e3b10098a0-catalog-content\") pod \"community-operators-qtpsf\" (UID: \"9b4f9546-2d15-4925-aba0-40e3b10098a0\") " pod="openshift-marketplace/community-operators-qtpsf"
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.510713 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvvxl\" (UniqueName: \"kubernetes.io/projected/9b4f9546-2d15-4925-aba0-40e3b10098a0-kube-api-access-wvvxl\") pod \"community-operators-qtpsf\" (UID: \"9b4f9546-2d15-4925-aba0-40e3b10098a0\") " pod="openshift-marketplace/community-operators-qtpsf"
Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.510873 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:27 crc kubenswrapper[4858]:
I0202 17:17:27.510935 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b4f9546-2d15-4925-aba0-40e3b10098a0-utilities\") pod \"community-operators-qtpsf\" (UID: \"9b4f9546-2d15-4925-aba0-40e3b10098a0\") " pod="openshift-marketplace/community-operators-qtpsf" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.511075 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b4f9546-2d15-4925-aba0-40e3b10098a0-catalog-content\") pod \"community-operators-qtpsf\" (UID: \"9b4f9546-2d15-4925-aba0-40e3b10098a0\") " pod="openshift-marketplace/community-operators-qtpsf" Feb 02 17:17:27 crc kubenswrapper[4858]: E0202 17:17:27.511259 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:28.011244352 +0000 UTC m=+149.163659617 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.550898 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvvxl\" (UniqueName: \"kubernetes.io/projected/9b4f9546-2d15-4925-aba0-40e3b10098a0-kube-api-access-wvvxl\") pod \"community-operators-qtpsf\" (UID: \"9b4f9546-2d15-4925-aba0-40e3b10098a0\") " pod="openshift-marketplace/community-operators-qtpsf" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.592403 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7x482"] Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.600440 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7x482" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.605387 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.611950 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:27 crc kubenswrapper[4858]: E0202 17:17:27.612512 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:28.112474865 +0000 UTC m=+149.264890130 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.613022 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:27 crc kubenswrapper[4858]: E0202 17:17:27.613395 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:28.113386852 +0000 UTC m=+149.265802117 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.613574 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69eb2d24-ee9f-4ef2-8bf0-233099196e0d-catalog-content\") pod \"certified-operators-7x482\" (UID: \"69eb2d24-ee9f-4ef2-8bf0-233099196e0d\") " pod="openshift-marketplace/certified-operators-7x482" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.613769 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69eb2d24-ee9f-4ef2-8bf0-233099196e0d-utilities\") pod \"certified-operators-7x482\" (UID: \"69eb2d24-ee9f-4ef2-8bf0-233099196e0d\") " pod="openshift-marketplace/certified-operators-7x482" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.613916 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dmmc\" (UniqueName: \"kubernetes.io/projected/69eb2d24-ee9f-4ef2-8bf0-233099196e0d-kube-api-access-7dmmc\") pod \"certified-operators-7x482\" (UID: \"69eb2d24-ee9f-4ef2-8bf0-233099196e0d\") " pod="openshift-marketplace/certified-operators-7x482" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.623016 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7x482"] Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.716526 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:27 crc 
kubenswrapper[4858]: I0202 17:17:27.716807 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69eb2d24-ee9f-4ef2-8bf0-233099196e0d-catalog-content\") pod \"certified-operators-7x482\" (UID: \"69eb2d24-ee9f-4ef2-8bf0-233099196e0d\") " pod="openshift-marketplace/certified-operators-7x482" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.716905 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69eb2d24-ee9f-4ef2-8bf0-233099196e0d-utilities\") pod \"certified-operators-7x482\" (UID: \"69eb2d24-ee9f-4ef2-8bf0-233099196e0d\") " pod="openshift-marketplace/certified-operators-7x482" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.716964 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dmmc\" (UniqueName: \"kubernetes.io/projected/69eb2d24-ee9f-4ef2-8bf0-233099196e0d-kube-api-access-7dmmc\") pod \"certified-operators-7x482\" (UID: \"69eb2d24-ee9f-4ef2-8bf0-233099196e0d\") " pod="openshift-marketplace/certified-operators-7x482" Feb 02 17:17:27 crc kubenswrapper[4858]: E0202 17:17:27.717395 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:28.217376317 +0000 UTC m=+149.369791592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.717861 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69eb2d24-ee9f-4ef2-8bf0-233099196e0d-catalog-content\") pod \"certified-operators-7x482\" (UID: \"69eb2d24-ee9f-4ef2-8bf0-233099196e0d\") " pod="openshift-marketplace/certified-operators-7x482" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.718155 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69eb2d24-ee9f-4ef2-8bf0-233099196e0d-utilities\") pod \"certified-operators-7x482\" (UID: \"69eb2d24-ee9f-4ef2-8bf0-233099196e0d\") " pod="openshift-marketplace/certified-operators-7x482" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.720061 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qtpsf" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.748671 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dmmc\" (UniqueName: \"kubernetes.io/projected/69eb2d24-ee9f-4ef2-8bf0-233099196e0d-kube-api-access-7dmmc\") pod \"certified-operators-7x482\" (UID: \"69eb2d24-ee9f-4ef2-8bf0-233099196e0d\") " pod="openshift-marketplace/certified-operators-7x482" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.802855 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cncg2"] Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.803849 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cncg2" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.816098 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.816151 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.818121 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6a77909-6aaf-4339-84fc-a3121e8d15f3-utilities\") pod \"community-operators-cncg2\" (UID: \"c6a77909-6aaf-4339-84fc-a3121e8d15f3\") " pod="openshift-marketplace/community-operators-cncg2" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.818168 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.818251 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6a77909-6aaf-4339-84fc-a3121e8d15f3-catalog-content\") pod \"community-operators-cncg2\" (UID: \"c6a77909-6aaf-4339-84fc-a3121e8d15f3\") " pod="openshift-marketplace/community-operators-cncg2" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.818274 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p86ft\" (UniqueName: \"kubernetes.io/projected/c6a77909-6aaf-4339-84fc-a3121e8d15f3-kube-api-access-p86ft\") pod \"community-operators-cncg2\" (UID: \"c6a77909-6aaf-4339-84fc-a3121e8d15f3\") " pod="openshift-marketplace/community-operators-cncg2" Feb 02 17:17:27 crc kubenswrapper[4858]: E0202 17:17:27.818557 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 17:17:28.318544688 +0000 UTC m=+149.470959953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.819416 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cncg2"] Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.850324 4858 patch_prober.go:28] interesting pod/apiserver-76f77b778f-9wxc4 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 02 17:17:27 crc kubenswrapper[4858]: [+]log ok Feb 02 17:17:27 crc kubenswrapper[4858]: [+]etcd ok Feb 02 17:17:27 crc kubenswrapper[4858]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 02 17:17:27 crc kubenswrapper[4858]: [+]poststarthook/generic-apiserver-start-informers ok Feb 02 17:17:27 crc kubenswrapper[4858]: [+]poststarthook/max-in-flight-filter ok Feb 02 17:17:27 crc kubenswrapper[4858]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 02 17:17:27 crc kubenswrapper[4858]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 02 17:17:27 crc kubenswrapper[4858]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 02 17:17:27 crc kubenswrapper[4858]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 02 17:17:27 crc kubenswrapper[4858]: [+]poststarthook/project.openshift.io-projectcache ok Feb 02 17:17:27 crc kubenswrapper[4858]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 02 17:17:27 crc kubenswrapper[4858]: [+]poststarthook/openshift.io-startinformers ok Feb 02 17:17:27 crc kubenswrapper[4858]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 02 17:17:27 crc kubenswrapper[4858]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 02 17:17:27 crc kubenswrapper[4858]: livez check failed Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.850836 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-9wxc4" podUID="42967b1d-ac6e-47d5-b67a-c7e5e8adc7d0" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.871909 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nxtrt" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.925415 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.925563 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c6a77909-6aaf-4339-84fc-a3121e8d15f3-utilities\") pod \"community-operators-cncg2\" (UID: \"c6a77909-6aaf-4339-84fc-a3121e8d15f3\") " pod="openshift-marketplace/community-operators-cncg2" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.925667 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6a77909-6aaf-4339-84fc-a3121e8d15f3-catalog-content\") pod \"community-operators-cncg2\" (UID: \"c6a77909-6aaf-4339-84fc-a3121e8d15f3\") " pod="openshift-marketplace/community-operators-cncg2" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.925694 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p86ft\" (UniqueName: \"kubernetes.io/projected/c6a77909-6aaf-4339-84fc-a3121e8d15f3-kube-api-access-p86ft\") pod \"community-operators-cncg2\" (UID: \"c6a77909-6aaf-4339-84fc-a3121e8d15f3\") " pod="openshift-marketplace/community-operators-cncg2" Feb 02 17:17:27 crc kubenswrapper[4858]: E0202 17:17:27.926010 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:28.425996434 +0000 UTC m=+149.578411699 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.926319 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6a77909-6aaf-4339-84fc-a3121e8d15f3-utilities\") pod \"community-operators-cncg2\" (UID: \"c6a77909-6aaf-4339-84fc-a3121e8d15f3\") " pod="openshift-marketplace/community-operators-cncg2" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.926534 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6a77909-6aaf-4339-84fc-a3121e8d15f3-catalog-content\") pod \"community-operators-cncg2\" (UID: \"c6a77909-6aaf-4339-84fc-a3121e8d15f3\") " pod="openshift-marketplace/community-operators-cncg2" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.949248 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7x482" Feb 02 17:17:27 crc kubenswrapper[4858]: I0202 17:17:27.953755 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p86ft\" (UniqueName: \"kubernetes.io/projected/c6a77909-6aaf-4339-84fc-a3121e8d15f3-kube-api-access-p86ft\") pod \"community-operators-cncg2\" (UID: \"c6a77909-6aaf-4339-84fc-a3121e8d15f3\") " pod="openshift-marketplace/community-operators-cncg2" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.008467 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vqhch"] Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.009642 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vqhch" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.022297 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vqhch"] Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.027176 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrhl2\" (UniqueName: \"kubernetes.io/projected/2bd40f69-131b-4d0c-87d9-bfae63f9a4eb-kube-api-access-wrhl2\") pod \"certified-operators-vqhch\" (UID: \"2bd40f69-131b-4d0c-87d9-bfae63f9a4eb\") " pod="openshift-marketplace/certified-operators-vqhch" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.027264 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bd40f69-131b-4d0c-87d9-bfae63f9a4eb-catalog-content\") pod \"certified-operators-vqhch\" (UID: \"2bd40f69-131b-4d0c-87d9-bfae63f9a4eb\") " pod="openshift-marketplace/certified-operators-vqhch" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.027309 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.027367 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bd40f69-131b-4d0c-87d9-bfae63f9a4eb-utilities\") pod \"certified-operators-vqhch\" (UID: \"2bd40f69-131b-4d0c-87d9-bfae63f9a4eb\") " pod="openshift-marketplace/certified-operators-vqhch" Feb 02 17:17:28 crc kubenswrapper[4858]: E0202 17:17:28.027782 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:28.527770393 +0000 UTC m=+149.680185658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.128535 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.128761 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bd40f69-131b-4d0c-87d9-bfae63f9a4eb-utilities\") pod \"certified-operators-vqhch\" (UID: \"2bd40f69-131b-4d0c-87d9-bfae63f9a4eb\") " pod="openshift-marketplace/certified-operators-vqhch" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.128804 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrhl2\" (UniqueName: \"kubernetes.io/projected/2bd40f69-131b-4d0c-87d9-bfae63f9a4eb-kube-api-access-wrhl2\") pod \"certified-operators-vqhch\" (UID: \"2bd40f69-131b-4d0c-87d9-bfae63f9a4eb\") " pod="openshift-marketplace/certified-operators-vqhch" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.128850 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bd40f69-131b-4d0c-87d9-bfae63f9a4eb-catalog-content\") pod \"certified-operators-vqhch\" (UID: \"2bd40f69-131b-4d0c-87d9-bfae63f9a4eb\") " pod="openshift-marketplace/certified-operators-vqhch" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.129385 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bd40f69-131b-4d0c-87d9-bfae63f9a4eb-catalog-content\") pod \"certified-operators-vqhch\" (UID: \"2bd40f69-131b-4d0c-87d9-bfae63f9a4eb\") " pod="openshift-marketplace/certified-operators-vqhch" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.129495 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cncg2" Feb 02 17:17:28 crc kubenswrapper[4858]: E0202 17:17:28.129892 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:28.629877112 +0000 UTC m=+149.782292377 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.130109 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bd40f69-131b-4d0c-87d9-bfae63f9a4eb-utilities\") pod \"certified-operators-vqhch\" (UID: \"2bd40f69-131b-4d0c-87d9-bfae63f9a4eb\") " pod="openshift-marketplace/certified-operators-vqhch" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.150081 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xr77f" event={"ID":"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610","Type":"ContainerStarted","Data":"65d30dd4e390c0a71efe234449800ce8f1b0888a441f9ef8580577db39b4b75a"} Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.150112 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xr77f" event={"ID":"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610","Type":"ContainerStarted","Data":"2598d8c78aab69a12704e93745ebf9cf5139717117534cde320d2ce71ebac0bd"} Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.150122 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xr77f" event={"ID":"92f95f3e-0d8e-4ba7-a68f-438ca7f8c610","Type":"ContainerStarted","Data":"2d80c3510ae1922713444ba27b20fef2b6998bcb887505e5258ab61e90aecbbc"} Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.150645 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-ffh76 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.150672 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ffh76" podUID="fd740a32-9003-4d27-8c7b-3423717fd9bf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.156672 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-45snr" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.157205 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrhl2\" (UniqueName: \"kubernetes.io/projected/2bd40f69-131b-4d0c-87d9-bfae63f9a4eb-kube-api-access-wrhl2\") pod \"certified-operators-vqhch\" (UID: \"2bd40f69-131b-4d0c-87d9-bfae63f9a4eb\") " pod="openshift-marketplace/certified-operators-vqhch" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.211795 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-xr77f" podStartSLOduration=9.211780194 podStartE2EDuration="9.211780194s" podCreationTimestamp="2026-02-02 17:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 
17:17:28.187258019 +0000 UTC m=+149.339673284" watchObservedRunningTime="2026-02-02 17:17:28.211780194 +0000 UTC m=+149.364195449" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.230924 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:28 crc kubenswrapper[4858]: E0202 17:17:28.233568 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:28.733553938 +0000 UTC m=+149.885969203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.271851 4858 patch_prober.go:28] interesting pod/router-default-5444994796-frw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 17:17:28 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Feb 02 17:17:28 crc kubenswrapper[4858]: [+]process-running ok Feb 02 17:17:28 crc kubenswrapper[4858]: healthz check failed Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.271905 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-frw2d" podUID="4af8047b-d906-4458-84e9-4cbefe269b59" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.297864 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qtpsf"] Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.331855 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.331966 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.332004 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.332025 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.340102 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:17:28 crc kubenswrapper[4858]: E0202 17:17:28.341285 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:28.841270482 +0000 UTC m=+149.993685747 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.343501 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.343765 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vqhch" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.349334 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.434588 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.435558 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.435823 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:28 crc kubenswrapper[4858]: E0202 17:17:28.436105 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:28.936093376 +0000 UTC m=+150.088508641 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.458580 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.472718 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.498772 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7x482"] Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.536365 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:28 crc kubenswrapper[4858]: E0202 17:17:28.536731 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:29.036714881 +0000 UTC m=+150.189130146 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.573868 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.577706 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.580336 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.595395 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.612399 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.629575 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cncg2"] Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.637697 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:28 crc kubenswrapper[4858]: E0202 17:17:28.638010 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:29.137997835 +0000 UTC m=+150.290413100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.656164 4858 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.721697 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.743327 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.743503 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.743605 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 17:17:28 crc kubenswrapper[4858]: E0202 17:17:28.743739 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:29.243722621 +0000 UTC m=+150.396137886 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.844887 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.844936 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.845019 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.845043 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 17:17:28 crc kubenswrapper[4858]: E0202 17:17:28.845583 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:29.345572522 +0000 UTC m=+150.497987787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.869642 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.934216 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.939445 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vqhch"] Feb 02 17:17:28 crc kubenswrapper[4858]: I0202 17:17:28.948191 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 17:17:28 crc kubenswrapper[4858]: E0202 17:17:28.948570 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:29.448529766 +0000 UTC m=+150.600945041 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:28 crc kubenswrapper[4858]: W0202 17:17:28.963804 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bd40f69_131b_4d0c_87d9_bfae63f9a4eb.slice/crio-647ea2a44767336b9b7204659db105e169f82d3a59e3b9b109f979981e0ecbf2 WatchSource:0}: Error finding container 647ea2a44767336b9b7204659db105e169f82d3a59e3b9b109f979981e0ecbf2: Status 404 returned error can't find the container with id 647ea2a44767336b9b7204659db105e169f82d3a59e3b9b109f979981e0ecbf2
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.049421 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:29 crc kubenswrapper[4858]: E0202 17:17:29.049964 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:29.549952405 +0000 UTC m=+150.702367660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:29 crc kubenswrapper[4858]: W0202 17:17:29.069316 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-d81baefcbd13055c9975c69582c971c6d3dafa0f39e9db95dc6b321fe079af17 WatchSource:0}: Error finding container d81baefcbd13055c9975c69582c971c6d3dafa0f39e9db95dc6b321fe079af17: Status 404 returned error can't find the container with id d81baefcbd13055c9975c69582c971c6d3dafa0f39e9db95dc6b321fe079af17
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.149947 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:17:29 crc kubenswrapper[4858]: E0202 17:17:29.150080 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:29.650059485 +0000 UTC m=+150.802474750 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.150431 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:29 crc kubenswrapper[4858]: E0202 17:17:29.150695 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:29.650684393 +0000 UTC m=+150.803099658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.156733 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ea63e050d60639826a7332d206d56f3bd23fb8fef2ac3c41a0a498f5c0e1b5e4"}
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.159357 4858 generic.go:334] "Generic (PLEG): container finished" podID="69eb2d24-ee9f-4ef2-8bf0-233099196e0d" containerID="be20214a6c5d05ca3d39d99b54c35104c3b89fd6cd0cb11c5130f46607285074" exitCode=0
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.159422 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7x482" event={"ID":"69eb2d24-ee9f-4ef2-8bf0-233099196e0d","Type":"ContainerDied","Data":"be20214a6c5d05ca3d39d99b54c35104c3b89fd6cd0cb11c5130f46607285074"}
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.159445 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7x482" event={"ID":"69eb2d24-ee9f-4ef2-8bf0-233099196e0d","Type":"ContainerStarted","Data":"49ca3ab742df3cd8426ab89c6ab6f5c9a0c439ae6c64e6d3b1901bdd33bea0e7"}
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.161720 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.166028 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"37dae0e268e6c60111363ea37259f085db5e5d47e928ea976e6b0e692a52a32e"}
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.167264 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqhch" event={"ID":"2bd40f69-131b-4d0c-87d9-bfae63f9a4eb","Type":"ContainerStarted","Data":"647ea2a44767336b9b7204659db105e169f82d3a59e3b9b109f979981e0ecbf2"}
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.188309 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d81baefcbd13055c9975c69582c971c6d3dafa0f39e9db95dc6b321fe079af17"}
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.190425 4858 generic.go:334] "Generic (PLEG): container finished" podID="c6a77909-6aaf-4339-84fc-a3121e8d15f3" containerID="f502d1bbd8fa02f03cafe02b39c277f1d510bc99b5f9ac56fbd0d7f287b192d5" exitCode=0
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.190468 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cncg2" event={"ID":"c6a77909-6aaf-4339-84fc-a3121e8d15f3","Type":"ContainerDied","Data":"f502d1bbd8fa02f03cafe02b39c277f1d510bc99b5f9ac56fbd0d7f287b192d5"}
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.190488 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cncg2" event={"ID":"c6a77909-6aaf-4339-84fc-a3121e8d15f3","Type":"ContainerStarted","Data":"127aa6a7481c55b4eb899b45d7a0f7f7066d7b29d8d60f2aa7ce625f32469df9"}
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.192245 4858 generic.go:334] "Generic (PLEG): container finished" podID="9b4f9546-2d15-4925-aba0-40e3b10098a0" containerID="8dd825bdd6147d4e0f0ace09def7945abbca9ad5bef70bcb2e0492534732fe8d" exitCode=0
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.193212 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtpsf" event={"ID":"9b4f9546-2d15-4925-aba0-40e3b10098a0","Type":"ContainerDied","Data":"8dd825bdd6147d4e0f0ace09def7945abbca9ad5bef70bcb2e0492534732fe8d"}
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.193243 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtpsf" event={"ID":"9b4f9546-2d15-4925-aba0-40e3b10098a0","Type":"ContainerStarted","Data":"f1e68fa4c1bfa59c96af8c6de43552c3749077f455e8135c6ac1c224fe4a91c5"}
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.251304 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:17:29 crc kubenswrapper[4858]: E0202 17:17:29.251614 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:29.751600887 +0000 UTC m=+150.904016142 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.256919 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.268952 4858 patch_prober.go:28] interesting pod/router-default-5444994796-frw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 17:17:29 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld
Feb 02 17:17:29 crc kubenswrapper[4858]: [+]process-running ok
Feb 02 17:17:29 crc kubenswrapper[4858]: healthz check failed
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.269038 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-frw2d" podUID="4af8047b-d906-4458-84e9-4cbefe269b59" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.352608 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:29 crc kubenswrapper[4858]: E0202 17:17:29.353580 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:29.853561531 +0000 UTC m=+151.005976796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.454095 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:17:29 crc kubenswrapper[4858]: E0202 17:17:29.454216 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:29.954194446 +0000 UTC m=+151.106609721 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.454563 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:29 crc kubenswrapper[4858]: E0202 17:17:29.454860 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:29.954850946 +0000 UTC m=+151.107266211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.555651 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:17:29 crc kubenswrapper[4858]: E0202 17:17:29.555818 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 17:17:30.05579169 +0000 UTC m=+151.208206965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.556084 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:29 crc kubenswrapper[4858]: E0202 17:17:29.556468 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 17:17:30.05644976 +0000 UTC m=+151.208865025 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j5zlt" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.576216 4858 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-02T17:17:28.656190553Z","Handler":null,"Name":""}
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.583394 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vmzkp"]
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.584744 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmzkp"
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.586538 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.590820 4858 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.590862 4858 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.599303 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmzkp"]
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.658134 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.659014 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a32894ac-052e-4a93-a3d1-79aeec5b8869-catalog-content\") pod \"redhat-marketplace-vmzkp\" (UID: \"a32894ac-052e-4a93-a3d1-79aeec5b8869\") " pod="openshift-marketplace/redhat-marketplace-vmzkp"
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.664360 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9knkc\" (UniqueName: \"kubernetes.io/projected/a32894ac-052e-4a93-a3d1-79aeec5b8869-kube-api-access-9knkc\") pod \"redhat-marketplace-vmzkp\" (UID: \"a32894ac-052e-4a93-a3d1-79aeec5b8869\") " pod="openshift-marketplace/redhat-marketplace-vmzkp"
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.664443 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a32894ac-052e-4a93-a3d1-79aeec5b8869-utilities\") pod \"redhat-marketplace-vmzkp\" (UID: \"a32894ac-052e-4a93-a3d1-79aeec5b8869\") " pod="openshift-marketplace/redhat-marketplace-vmzkp"
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.670778 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.765457 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9knkc\" (UniqueName: \"kubernetes.io/projected/a32894ac-052e-4a93-a3d1-79aeec5b8869-kube-api-access-9knkc\") pod \"redhat-marketplace-vmzkp\" (UID: \"a32894ac-052e-4a93-a3d1-79aeec5b8869\") " pod="openshift-marketplace/redhat-marketplace-vmzkp"
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.765511 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a32894ac-052e-4a93-a3d1-79aeec5b8869-utilities\") pod \"redhat-marketplace-vmzkp\" (UID: \"a32894ac-052e-4a93-a3d1-79aeec5b8869\") " pod="openshift-marketplace/redhat-marketplace-vmzkp"
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.765552 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.765585 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a32894ac-052e-4a93-a3d1-79aeec5b8869-catalog-content\") pod \"redhat-marketplace-vmzkp\" (UID: \"a32894ac-052e-4a93-a3d1-79aeec5b8869\") " pod="openshift-marketplace/redhat-marketplace-vmzkp"
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.765996 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a32894ac-052e-4a93-a3d1-79aeec5b8869-catalog-content\") pod \"redhat-marketplace-vmzkp\" (UID: \"a32894ac-052e-4a93-a3d1-79aeec5b8869\") " pod="openshift-marketplace/redhat-marketplace-vmzkp"
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.766462 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a32894ac-052e-4a93-a3d1-79aeec5b8869-utilities\") pod \"redhat-marketplace-vmzkp\" (UID: \"a32894ac-052e-4a93-a3d1-79aeec5b8869\") " pod="openshift-marketplace/redhat-marketplace-vmzkp"
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.768284 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.768312 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.799192 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9knkc\" (UniqueName: \"kubernetes.io/projected/a32894ac-052e-4a93-a3d1-79aeec5b8869-kube-api-access-9knkc\") pod \"redhat-marketplace-vmzkp\" (UID: \"a32894ac-052e-4a93-a3d1-79aeec5b8869\") " pod="openshift-marketplace/redhat-marketplace-vmzkp"
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.810493 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j5zlt\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.901688 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmzkp"
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.973128 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.990671 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w9kqv"]
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.992190 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9kqv"
Feb 02 17:17:29 crc kubenswrapper[4858]: I0202 17:17:29.999386 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9kqv"]
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.069348 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58698d7f-881d-44c8-8457-9595f4953b9f-utilities\") pod \"redhat-marketplace-w9kqv\" (UID: \"58698d7f-881d-44c8-8457-9595f4953b9f\") " pod="openshift-marketplace/redhat-marketplace-w9kqv"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.069411 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58698d7f-881d-44c8-8457-9595f4953b9f-catalog-content\") pod \"redhat-marketplace-w9kqv\" (UID: \"58698d7f-881d-44c8-8457-9595f4953b9f\") " pod="openshift-marketplace/redhat-marketplace-w9kqv"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.069453 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgg8w\" (UniqueName: \"kubernetes.io/projected/58698d7f-881d-44c8-8457-9595f4953b9f-kube-api-access-dgg8w\") pod \"redhat-marketplace-w9kqv\" (UID: \"58698d7f-881d-44c8-8457-9595f4953b9f\") " pod="openshift-marketplace/redhat-marketplace-w9kqv"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.170466 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgg8w\" (UniqueName: \"kubernetes.io/projected/58698d7f-881d-44c8-8457-9595f4953b9f-kube-api-access-dgg8w\") pod \"redhat-marketplace-w9kqv\" (UID: \"58698d7f-881d-44c8-8457-9595f4953b9f\") " pod="openshift-marketplace/redhat-marketplace-w9kqv"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.170580 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58698d7f-881d-44c8-8457-9595f4953b9f-utilities\") pod \"redhat-marketplace-w9kqv\" (UID: \"58698d7f-881d-44c8-8457-9595f4953b9f\") " pod="openshift-marketplace/redhat-marketplace-w9kqv"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.170616 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58698d7f-881d-44c8-8457-9595f4953b9f-catalog-content\") pod \"redhat-marketplace-w9kqv\" (UID: \"58698d7f-881d-44c8-8457-9595f4953b9f\") " pod="openshift-marketplace/redhat-marketplace-w9kqv"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.171541 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58698d7f-881d-44c8-8457-9595f4953b9f-utilities\") pod \"redhat-marketplace-w9kqv\" (UID: \"58698d7f-881d-44c8-8457-9595f4953b9f\") " pod="openshift-marketplace/redhat-marketplace-w9kqv"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.171624 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58698d7f-881d-44c8-8457-9595f4953b9f-catalog-content\") pod \"redhat-marketplace-w9kqv\" (UID: \"58698d7f-881d-44c8-8457-9595f4953b9f\") " pod="openshift-marketplace/redhat-marketplace-w9kqv"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.199790 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgg8w\" (UniqueName: \"kubernetes.io/projected/58698d7f-881d-44c8-8457-9595f4953b9f-kube-api-access-dgg8w\") pod \"redhat-marketplace-w9kqv\" (UID: \"58698d7f-881d-44c8-8457-9595f4953b9f\") " pod="openshift-marketplace/redhat-marketplace-w9kqv"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.208605 4858 generic.go:334] "Generic (PLEG): container finished" podID="2bd40f69-131b-4d0c-87d9-bfae63f9a4eb" containerID="fa92b68e5c3362010f6119fa7a2db7c9c0367c2d9d59a82abec419adbb25d62e" exitCode=0
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.208714 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqhch" event={"ID":"2bd40f69-131b-4d0c-87d9-bfae63f9a4eb","Type":"ContainerDied","Data":"fa92b68e5c3362010f6119fa7a2db7c9c0367c2d9d59a82abec419adbb25d62e"}
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.210714 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"36c9774a47ef19f3c0184997dd592aa6623706a2da2ac4aa092432e442ef819e"}
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.219849 4858 generic.go:334] "Generic (PLEG): container finished" podID="f3c72db6-4315-4210-9cfe-3c27b18e4abd" containerID="2d2736426dc9b8cf377bc45320176e344e9a75b7b04efaf3a097fdd13f77bb21" exitCode=0
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.219937 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29" event={"ID":"f3c72db6-4315-4210-9cfe-3c27b18e4abd","Type":"ContainerDied","Data":"2d2736426dc9b8cf377bc45320176e344e9a75b7b04efaf3a097fdd13f77bb21"}
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.224425 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e81f64109649928010958f0dfc07a7261432a90e1695624847f52cbf2f0a6eb0"}
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.225130 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.239386 4858 generic.go:334] "Generic (PLEG): container finished" podID="7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6" containerID="9d49b116c9fc26d00f59e5aabf613e6bba783b8477c058adb1d8f30cc622280d" exitCode=0
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.239459 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6","Type":"ContainerDied","Data":"9d49b116c9fc26d00f59e5aabf613e6bba783b8477c058adb1d8f30cc622280d"}
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.239488 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6","Type":"ContainerStarted","Data":"2dc274cbe5a90ecbc6f6a2861a36dfc7a8832e1b26b6ce865104fdb6d0b3ae07"}
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.243404 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5dd589e5dc86e3949e1e523ed1478c770c38418bffce49b86cb5e4dd61b70286"}
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.271018 4858 patch_prober.go:28] interesting pod/router-default-5444994796-frw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 17:17:30 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld
Feb 02 17:17:30 crc kubenswrapper[4858]: [+]process-running ok
Feb 02 17:17:30 crc kubenswrapper[4858]: healthz check failed
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.271078 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-frw2d" podUID="4af8047b-d906-4458-84e9-4cbefe269b59" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.369341 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9kqv"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.436969 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.437584 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmzkp"]
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.580693 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j5zlt"]
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.604354 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c9zvf"]
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.606090 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c9zvf"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.607419 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.614190 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c9zvf"]
Feb 02 17:17:30 crc kubenswrapper[4858]: W0202 17:17:30.620730 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc022725c_9725_4d5c_a703_5d61c931d9e8.slice/crio-45bfd244eab3a347ecd468019c62f97040760531ab76bd38666c1a494cb3952c WatchSource:0}: Error finding container 45bfd244eab3a347ecd468019c62f97040760531ab76bd38666c1a494cb3952c: Status 404 returned error can't find the container with id 45bfd244eab3a347ecd468019c62f97040760531ab76bd38666c1a494cb3952c
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.791732 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j989\" (UniqueName: \"kubernetes.io/projected/f1040c7c-84e3-41c7-9484-13022fbcef4b-kube-api-access-2j989\") pod \"redhat-operators-c9zvf\" (UID: \"f1040c7c-84e3-41c7-9484-13022fbcef4b\") " pod="openshift-marketplace/redhat-operators-c9zvf"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.791856 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1040c7c-84e3-41c7-9484-13022fbcef4b-catalog-content\") pod \"redhat-operators-c9zvf\" (UID: \"f1040c7c-84e3-41c7-9484-13022fbcef4b\") " pod="openshift-marketplace/redhat-operators-c9zvf"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.791910 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1040c7c-84e3-41c7-9484-13022fbcef4b-utilities\") pod \"redhat-operators-c9zvf\" (UID: \"f1040c7c-84e3-41c7-9484-13022fbcef4b\") " pod="openshift-marketplace/redhat-operators-c9zvf"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.882858 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9kqv"]
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.892821 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j989\" (UniqueName: \"kubernetes.io/projected/f1040c7c-84e3-41c7-9484-13022fbcef4b-kube-api-access-2j989\") pod \"redhat-operators-c9zvf\" (UID: \"f1040c7c-84e3-41c7-9484-13022fbcef4b\") " pod="openshift-marketplace/redhat-operators-c9zvf"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.892904 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1040c7c-84e3-41c7-9484-13022fbcef4b-catalog-content\") pod \"redhat-operators-c9zvf\" (UID: \"f1040c7c-84e3-41c7-9484-13022fbcef4b\") " pod="openshift-marketplace/redhat-operators-c9zvf"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.892942 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1040c7c-84e3-41c7-9484-13022fbcef4b-utilities\") pod \"redhat-operators-c9zvf\" (UID: \"f1040c7c-84e3-41c7-9484-13022fbcef4b\") " pod="openshift-marketplace/redhat-operators-c9zvf"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.905353 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1040c7c-84e3-41c7-9484-13022fbcef4b-catalog-content\") pod \"redhat-operators-c9zvf\" (UID: \"f1040c7c-84e3-41c7-9484-13022fbcef4b\") " pod="openshift-marketplace/redhat-operators-c9zvf"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.905423 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1040c7c-84e3-41c7-9484-13022fbcef4b-utilities\") pod \"redhat-operators-c9zvf\" (UID: \"f1040c7c-84e3-41c7-9484-13022fbcef4b\") " pod="openshift-marketplace/redhat-operators-c9zvf"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.909864 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j989\" (UniqueName: \"kubernetes.io/projected/f1040c7c-84e3-41c7-9484-13022fbcef4b-kube-api-access-2j989\") pod \"redhat-operators-c9zvf\" (UID: \"f1040c7c-84e3-41c7-9484-13022fbcef4b\") " pod="openshift-marketplace/redhat-operators-c9zvf"
Feb 02 17:17:30 crc kubenswrapper[4858]: I0202 17:17:30.954616 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c9zvf"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.016270 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5nj69"]
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.017458 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5nj69"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.033116 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5nj69"]
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.096041 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9015cfbb-4091-4598-b5fd-007d2372a89e-utilities\") pod \"redhat-operators-5nj69\" (UID: \"9015cfbb-4091-4598-b5fd-007d2372a89e\") " pod="openshift-marketplace/redhat-operators-5nj69"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.096090 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbpdw\" (UniqueName: \"kubernetes.io/projected/9015cfbb-4091-4598-b5fd-007d2372a89e-kube-api-access-sbpdw\") pod \"redhat-operators-5nj69\" (UID: \"9015cfbb-4091-4598-b5fd-007d2372a89e\") " pod="openshift-marketplace/redhat-operators-5nj69"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.096216 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9015cfbb-4091-4598-b5fd-007d2372a89e-catalog-content\") pod \"redhat-operators-5nj69\" (UID: \"9015cfbb-4091-4598-b5fd-007d2372a89e\") " pod="openshift-marketplace/redhat-operators-5nj69"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.198780 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9015cfbb-4091-4598-b5fd-007d2372a89e-catalog-content\") pod \"redhat-operators-5nj69\" (UID: \"9015cfbb-4091-4598-b5fd-007d2372a89e\") " pod="openshift-marketplace/redhat-operators-5nj69"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.199175 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9015cfbb-4091-4598-b5fd-007d2372a89e-utilities\") pod \"redhat-operators-5nj69\" (UID: \"9015cfbb-4091-4598-b5fd-007d2372a89e\") " pod="openshift-marketplace/redhat-operators-5nj69"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.199203 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbpdw\" (UniqueName: \"kubernetes.io/projected/9015cfbb-4091-4598-b5fd-007d2372a89e-kube-api-access-sbpdw\") pod \"redhat-operators-5nj69\" (UID: \"9015cfbb-4091-4598-b5fd-007d2372a89e\") " pod="openshift-marketplace/redhat-operators-5nj69"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.199475 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9015cfbb-4091-4598-b5fd-007d2372a89e-catalog-content\") pod \"redhat-operators-5nj69\" (UID: \"9015cfbb-4091-4598-b5fd-007d2372a89e\") " pod="openshift-marketplace/redhat-operators-5nj69"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.199568 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9015cfbb-4091-4598-b5fd-007d2372a89e-utilities\") pod \"redhat-operators-5nj69\" (UID: \"9015cfbb-4091-4598-b5fd-007d2372a89e\") " pod="openshift-marketplace/redhat-operators-5nj69"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.216493 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbpdw\" (UniqueName: \"kubernetes.io/projected/9015cfbb-4091-4598-b5fd-007d2372a89e-kube-api-access-sbpdw\") pod \"redhat-operators-5nj69\" (UID: \"9015cfbb-4091-4598-b5fd-007d2372a89e\") " pod="openshift-marketplace/redhat-operators-5nj69"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.216584 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-zww4k"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.220868 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-zww4k"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.222310 4858 patch_prober.go:28] interesting pod/console-f9d7485db-zww4k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.222348 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zww4k" podUID="84734edc-960c-4a16-9281-b10a1dc0a710" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.270396 4858 patch_prober.go:28] interesting pod/router-default-5444994796-frw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 17:17:31 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld
Feb 02 17:17:31 crc kubenswrapper[4858]: [+]process-running ok
Feb 02 17:17:31 crc kubenswrapper[4858]: healthz check failed
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.270453 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-frw2d" podUID="4af8047b-d906-4458-84e9-4cbefe269b59" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.275086 4858 generic.go:334] "Generic (PLEG): container finished" podID="a32894ac-052e-4a93-a3d1-79aeec5b8869" containerID="49731b21b282f39e584bb545fe27b0c4a395a5f79914425e3e4df4065920c229" exitCode=0
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.275230 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmzkp" event={"ID":"a32894ac-052e-4a93-a3d1-79aeec5b8869","Type":"ContainerDied","Data":"49731b21b282f39e584bb545fe27b0c4a395a5f79914425e3e4df4065920c229"}
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.275346 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmzkp" event={"ID":"a32894ac-052e-4a93-a3d1-79aeec5b8869","Type":"ContainerStarted","Data":"91169b3707147c80579305e556fb88fc2acf58b2a6355cc9184a56784856841d"}
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.295297 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9kqv" event={"ID":"58698d7f-881d-44c8-8457-9595f4953b9f","Type":"ContainerStarted","Data":"f923701e0d18c12a700216f320130359c30b5efe9c183d33d4c24580d7b0e0a8"}
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.295345 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9kqv" event={"ID":"58698d7f-881d-44c8-8457-9595f4953b9f","Type":"ContainerStarted","Data":"14b83c8542f684115929ade11c8b97b9199c96b1faee859e7a2e2b5e7030382f"}
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.331683 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" event={"ID":"c022725c-9725-4d5c-a703-5d61c931d9e8","Type":"ContainerStarted","Data":"b30ae748918a03768dc1fa47e8a2fdc80fb91d57ecb5e1002063b8621346e4d8"}
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.332043 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" event={"ID":"c022725c-9725-4d5c-a703-5d61c931d9e8","Type":"ContainerStarted","Data":"45bfd244eab3a347ecd468019c62f97040760531ab76bd38666c1a494cb3952c"}
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.337672 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5nj69"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.359290 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" podStartSLOduration=131.35927026 podStartE2EDuration="2m11.35927026s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:31.352858481 +0000 UTC m=+152.505273756" watchObservedRunningTime="2026-02-02 17:17:31.35927026 +0000 UTC m=+152.511685525"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.371661 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.406620 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-9wxc4"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.412208 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-9wxc4"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.440031 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.452822 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lvvvj"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.472280 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.612476 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c9zvf"]
Feb 02 17:17:31 crc kubenswrapper[4858]: W0202 17:17:31.620130 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1040c7c_84e3_41c7_9484_13022fbcef4b.slice/crio-b0911d89658e2a108503c219e4e892523fd3f339737112acd80093bb9cd61c12 WatchSource:0}: Error finding container b0911d89658e2a108503c219e4e892523fd3f339737112acd80093bb9cd61c12: Status 404 returned error can't find the container with id b0911d89658e2a108503c219e4e892523fd3f339737112acd80093bb9cd61c12
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.663670 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.693743 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29"
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.811691 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6-kube-api-access\") pod \"7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6\" (UID: \"7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6\") "
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.811738 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3c72db6-4315-4210-9cfe-3c27b18e4abd-config-volume\") pod \"f3c72db6-4315-4210-9cfe-3c27b18e4abd\" (UID: \"f3c72db6-4315-4210-9cfe-3c27b18e4abd\") "
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.811798 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pc8k\" (UniqueName: \"kubernetes.io/projected/f3c72db6-4315-4210-9cfe-3c27b18e4abd-kube-api-access-2pc8k\") pod \"f3c72db6-4315-4210-9cfe-3c27b18e4abd\" (UID: \"f3c72db6-4315-4210-9cfe-3c27b18e4abd\") "
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.811873 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3c72db6-4315-4210-9cfe-3c27b18e4abd-secret-volume\") pod \"f3c72db6-4315-4210-9cfe-3c27b18e4abd\" (UID: \"f3c72db6-4315-4210-9cfe-3c27b18e4abd\") "
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.811909 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6-kubelet-dir\") pod \"7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6\" (UID: \"7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6\") "
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.812138 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6" (UID: "7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.812722 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3c72db6-4315-4210-9cfe-3c27b18e4abd-config-volume" (OuterVolumeSpecName: "config-volume") pod "f3c72db6-4315-4210-9cfe-3c27b18e4abd" (UID: "f3c72db6-4315-4210-9cfe-3c27b18e4abd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.820145 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3c72db6-4315-4210-9cfe-3c27b18e4abd-kube-api-access-2pc8k" (OuterVolumeSpecName: "kube-api-access-2pc8k") pod "f3c72db6-4315-4210-9cfe-3c27b18e4abd" (UID: "f3c72db6-4315-4210-9cfe-3c27b18e4abd"). InnerVolumeSpecName "kube-api-access-2pc8k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.820802 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6" (UID: "7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.821134 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3c72db6-4315-4210-9cfe-3c27b18e4abd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f3c72db6-4315-4210-9cfe-3c27b18e4abd" (UID: "f3c72db6-4315-4210-9cfe-3c27b18e4abd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.913290 4858 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.913320 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.913334 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3c72db6-4315-4210-9cfe-3c27b18e4abd-config-volume\") on node \"crc\" DevicePath \"\""
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.913344 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pc8k\" (UniqueName: \"kubernetes.io/projected/f3c72db6-4315-4210-9cfe-3c27b18e4abd-kube-api-access-2pc8k\") on node \"crc\" DevicePath \"\""
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.913354 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3c72db6-4315-4210-9cfe-3c27b18e4abd-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 02 17:17:31 crc kubenswrapper[4858]: I0202 17:17:31.999279 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5nj69"]
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.269059 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-frw2d"
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.271597 4858 patch_prober.go:28] interesting pod/router-default-5444994796-frw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 17:17:32 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld
Feb 02 17:17:32 crc kubenswrapper[4858]: [+]process-running ok
Feb 02 17:17:32 crc kubenswrapper[4858]: healthz check failed
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.271632 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-frw2d" podUID="4af8047b-d906-4458-84e9-4cbefe269b59" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.305034 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-rnrz8"
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.383457 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nj69" event={"ID":"9015cfbb-4091-4598-b5fd-007d2372a89e","Type":"ContainerStarted","Data":"fd89ad2ac9e27dcd4e335970b747f3016bdd1b7a6fa8a74899c0062479dca989"}
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.393188 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb"
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.394235 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29" event={"ID":"f3c72db6-4315-4210-9cfe-3c27b18e4abd","Type":"ContainerDied","Data":"070dc8ddd33570bc590a1dac8c59ba6f0c7b38d01be2abf405a67c12b78589b4"}
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.394266 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="070dc8ddd33570bc590a1dac8c59ba6f0c7b38d01be2abf405a67c12b78589b4"
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.394322 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29"
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.398564 4858 generic.go:334] "Generic (PLEG): container finished" podID="f1040c7c-84e3-41c7-9484-13022fbcef4b" containerID="75b856138fabfbf84faab7daa1e48276bd7ee52850621b24c8bb579c549a6fb4" exitCode=0
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.398629 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9zvf" event={"ID":"f1040c7c-84e3-41c7-9484-13022fbcef4b","Type":"ContainerDied","Data":"75b856138fabfbf84faab7daa1e48276bd7ee52850621b24c8bb579c549a6fb4"}
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.398656 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9zvf" event={"ID":"f1040c7c-84e3-41c7-9484-13022fbcef4b","Type":"ContainerStarted","Data":"b0911d89658e2a108503c219e4e892523fd3f339737112acd80093bb9cd61c12"}
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.411070 4858 generic.go:334] "Generic (PLEG): container finished" podID="58698d7f-881d-44c8-8457-9595f4953b9f" containerID="f923701e0d18c12a700216f320130359c30b5efe9c183d33d4c24580d7b0e0a8" exitCode=0
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.479526 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9kqv" event={"ID":"58698d7f-881d-44c8-8457-9595f4953b9f","Type":"ContainerDied","Data":"f923701e0d18c12a700216f320130359c30b5efe9c183d33d4c24580d7b0e0a8"}
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.479562 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.479595 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6","Type":"ContainerDied","Data":"2dc274cbe5a90ecbc6f6a2861a36dfc7a8832e1b26b6ce865104fdb6d0b3ae07"}
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.479614 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dc274cbe5a90ecbc6f6a2861a36dfc7a8832e1b26b6ce865104fdb6d0b3ae07"
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.480579 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt"
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.486525 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-ffh76 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.486595 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ffh76" podUID="fd740a32-9003-4d27-8c7b-3423717fd9bf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.487038 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-ffh76 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.487089 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ffh76" podUID="fd740a32-9003-4d27-8c7b-3423717fd9bf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.748875 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 02 17:17:32 crc kubenswrapper[4858]: E0202 17:17:32.749067 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3c72db6-4315-4210-9cfe-3c27b18e4abd" containerName="collect-profiles"
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.749078 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3c72db6-4315-4210-9cfe-3c27b18e4abd" containerName="collect-profiles"
Feb 02 17:17:32 crc kubenswrapper[4858]: E0202 17:17:32.749098 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6" containerName="pruner"
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.749106 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6" containerName="pruner"
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.749205 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d87b6b7-9dbb-4ba9-9909-d9098b6f19b6" containerName="pruner"
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.749423 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3c72db6-4315-4210-9cfe-3c27b18e4abd" containerName="collect-profiles"
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.749745 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.751891 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.753044 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.775060 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.942584 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2127439-d73b-439f-9644-3ea7068956c2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f2127439-d73b-439f-9644-3ea7068956c2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 17:17:32 crc kubenswrapper[4858]: I0202 17:17:32.943203 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2127439-d73b-439f-9644-3ea7068956c2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f2127439-d73b-439f-9644-3ea7068956c2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 17:17:33 crc kubenswrapper[4858]: I0202 17:17:33.049386 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2127439-d73b-439f-9644-3ea7068956c2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f2127439-d73b-439f-9644-3ea7068956c2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 17:17:33 crc kubenswrapper[4858]: I0202 17:17:33.049459 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2127439-d73b-439f-9644-3ea7068956c2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f2127439-d73b-439f-9644-3ea7068956c2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 17:17:33 crc kubenswrapper[4858]: I0202 17:17:33.049504 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2127439-d73b-439f-9644-3ea7068956c2-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f2127439-d73b-439f-9644-3ea7068956c2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 17:17:33 crc kubenswrapper[4858]: I0202 17:17:33.093140 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2127439-d73b-439f-9644-3ea7068956c2-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f2127439-d73b-439f-9644-3ea7068956c2\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 17:17:33 crc kubenswrapper[4858]: I0202 17:17:33.269894 4858 patch_prober.go:28] interesting pod/router-default-5444994796-frw2d container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 17:17:33 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld
Feb 02 17:17:33 crc kubenswrapper[4858]: [+]process-running ok
Feb 02 17:17:33 crc kubenswrapper[4858]: healthz check failed
Feb 02 17:17:33 crc kubenswrapper[4858]: I0202 17:17:33.271098 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-frw2d" podUID="4af8047b-d906-4458-84e9-4cbefe269b59" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 17:17:33 crc kubenswrapper[4858]: I0202 17:17:33.376229 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 02 17:17:33 crc kubenswrapper[4858]: I0202 17:17:33.542373 4858 generic.go:334] "Generic (PLEG): container finished" podID="9015cfbb-4091-4598-b5fd-007d2372a89e" containerID="23512c88d51369a11d0099a8ad9d66f399282c3cf0509161b2e04b8333a4ecf8" exitCode=0
Feb 02 17:17:33 crc kubenswrapper[4858]: I0202 17:17:33.542584 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nj69" event={"ID":"9015cfbb-4091-4598-b5fd-007d2372a89e","Type":"ContainerDied","Data":"23512c88d51369a11d0099a8ad9d66f399282c3cf0509161b2e04b8333a4ecf8"}
Feb 02 17:17:33 crc kubenswrapper[4858]: I0202 17:17:33.911343 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 02 17:17:34 crc kubenswrapper[4858]: I0202 17:17:34.272696 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-frw2d"
Feb 02 17:17:34 crc kubenswrapper[4858]: I0202 17:17:34.279685 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-frw2d"
Feb 02 17:17:34 crc kubenswrapper[4858]: I0202 17:17:34.557429 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f2127439-d73b-439f-9644-3ea7068956c2","Type":"ContainerStarted","Data":"e70fc6e71875823af1e6b744902efb57e166041eaf05e1567021e42f54ad3127"}
Feb 02 17:17:35 crc kubenswrapper[4858]: I0202 17:17:35.568900 4858 generic.go:334] "Generic (PLEG): container finished" podID="f2127439-d73b-439f-9644-3ea7068956c2" containerID="b3008e51679ab1f7f903f327e37668e352f96fd9c0cea7c90fa1b49945a612b1" exitCode=0
Feb 02 17:17:35 crc kubenswrapper[4858]: I0202 17:17:35.568946 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f2127439-d73b-439f-9644-3ea7068956c2","Type":"ContainerDied","Data":"b3008e51679ab1f7f903f327e37668e352f96fd9c0cea7c90fa1b49945a612b1"}
Feb 02 17:17:36 crc kubenswrapper[4858]: I0202 17:17:36.904814 4858 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 17:17:37 crc kubenswrapper[4858]: I0202 17:17:37.021037 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2127439-d73b-439f-9644-3ea7068956c2-kubelet-dir\") pod \"f2127439-d73b-439f-9644-3ea7068956c2\" (UID: \"f2127439-d73b-439f-9644-3ea7068956c2\") " Feb 02 17:17:37 crc kubenswrapper[4858]: I0202 17:17:37.021295 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2127439-d73b-439f-9644-3ea7068956c2-kube-api-access\") pod \"f2127439-d73b-439f-9644-3ea7068956c2\" (UID: \"f2127439-d73b-439f-9644-3ea7068956c2\") " Feb 02 17:17:37 crc kubenswrapper[4858]: I0202 17:17:37.021139 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2127439-d73b-439f-9644-3ea7068956c2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f2127439-d73b-439f-9644-3ea7068956c2" (UID: "f2127439-d73b-439f-9644-3ea7068956c2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:17:37 crc kubenswrapper[4858]: I0202 17:17:37.021844 4858 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2127439-d73b-439f-9644-3ea7068956c2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 17:17:37 crc kubenswrapper[4858]: I0202 17:17:37.048924 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2127439-d73b-439f-9644-3ea7068956c2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f2127439-d73b-439f-9644-3ea7068956c2" (UID: "f2127439-d73b-439f-9644-3ea7068956c2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:17:37 crc kubenswrapper[4858]: I0202 17:17:37.126097 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2127439-d73b-439f-9644-3ea7068956c2-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 17:17:37 crc kubenswrapper[4858]: I0202 17:17:37.430339 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-q5rqg" Feb 02 17:17:37 crc kubenswrapper[4858]: I0202 17:17:37.596106 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f2127439-d73b-439f-9644-3ea7068956c2","Type":"ContainerDied","Data":"e70fc6e71875823af1e6b744902efb57e166041eaf05e1567021e42f54ad3127"} Feb 02 17:17:37 crc kubenswrapper[4858]: I0202 17:17:37.596147 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e70fc6e71875823af1e6b744902efb57e166041eaf05e1567021e42f54ad3127" Feb 02 17:17:37 crc kubenswrapper[4858]: I0202 17:17:37.596206 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 17:17:41 crc kubenswrapper[4858]: I0202 17:17:41.224343 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-zww4k" Feb 02 17:17:41 crc kubenswrapper[4858]: I0202 17:17:41.230003 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-zww4k" Feb 02 17:17:42 crc kubenswrapper[4858]: I0202 17:17:42.486382 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-ffh76 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 02 17:17:42 crc kubenswrapper[4858]: I0202 17:17:42.486734 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ffh76" podUID="fd740a32-9003-4d27-8c7b-3423717fd9bf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 02 17:17:42 crc kubenswrapper[4858]: I0202 17:17:42.486382 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-ffh76 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 02 17:17:42 crc kubenswrapper[4858]: I0202 17:17:42.487167 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ffh76" podUID="fd740a32-9003-4d27-8c7b-3423717fd9bf" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 02 17:17:42 crc kubenswrapper[4858]: I0202 17:17:42.710202 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs\") pod \"network-metrics-daemon-t8jfm\" (UID: \"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\") " pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:17:42 crc kubenswrapper[4858]: I0202 17:17:42.728824 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122-metrics-certs\") pod \"network-metrics-daemon-t8jfm\" (UID: \"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122\") " pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:17:42 crc kubenswrapper[4858]: I0202 17:17:42.912351 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-t8jfm" Feb 02 17:17:44 crc kubenswrapper[4858]: I0202 17:17:44.998310 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-t8jfm"] Feb 02 17:17:45 crc kubenswrapper[4858]: I0202 17:17:45.654796 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" event={"ID":"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122","Type":"ContainerStarted","Data":"4222b9137c7a08a5fe9245505d479246979383d3c5a98583e74e7ac530540008"} Feb 02 17:17:46 crc kubenswrapper[4858]: I0202 17:17:46.676008 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" event={"ID":"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122","Type":"ContainerStarted","Data":"065b818f4ba1d29cdeb8c663735321adb2f3c8fe599061e64d6f1a3e096b7759"} Feb 02 17:17:48 crc kubenswrapper[4858]: I0202 17:17:48.686625 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-t8jfm" event={"ID":"8b6ba2bd-f55a-4fe4-b7fc-7d6c4c7ef122","Type":"ContainerStarted","Data":"f2183c4cc83c5b8ee27859c096a78c3e4f0c56c2f2f2c1ff5f2c45dcb77e71f9"} Feb 02 17:17:49 crc kubenswrapper[4858]: I0202 17:17:49.198050 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7c9rp"] Feb 02 17:17:49 crc kubenswrapper[4858]: I0202 17:17:49.198327 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" podUID="0b36dd0a-26c9-4d5f-ae02-aa432af223ad" containerName="controller-manager" containerID="cri-o://6bd65021832fb1ebf57569393e6c2a2b4138dcc0981daf10cdf45f8be31dd190" gracePeriod=30 Feb 02 17:17:49 crc kubenswrapper[4858]: I0202 17:17:49.215235 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4"] Feb 02 17:17:49 crc kubenswrapper[4858]: I0202 17:17:49.215422 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4" podUID="b330afef-9be2-4944-b014-0b6b2478316d" containerName="route-controller-manager" containerID="cri-o://48ab0488dc16a2a771cc7a688671e5a1fd75e4b017b33daaa9d0bef5570d57cc" gracePeriod=30 Feb 02 17:17:49 crc kubenswrapper[4858]: I0202 17:17:49.695176 4858 generic.go:334] "Generic (PLEG): container finished" podID="0b36dd0a-26c9-4d5f-ae02-aa432af223ad" containerID="6bd65021832fb1ebf57569393e6c2a2b4138dcc0981daf10cdf45f8be31dd190" exitCode=0 Feb 02 17:17:49 crc kubenswrapper[4858]: I0202 17:17:49.695244 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" event={"ID":"0b36dd0a-26c9-4d5f-ae02-aa432af223ad","Type":"ContainerDied","Data":"6bd65021832fb1ebf57569393e6c2a2b4138dcc0981daf10cdf45f8be31dd190"} Feb 02 17:17:49 crc kubenswrapper[4858]: I0202 17:17:49.696549 4858 generic.go:334] "Generic (PLEG): container finished" podID="b330afef-9be2-4944-b014-0b6b2478316d" containerID="48ab0488dc16a2a771cc7a688671e5a1fd75e4b017b33daaa9d0bef5570d57cc" exitCode=0 Feb 02 17:17:49 crc kubenswrapper[4858]: I0202 17:17:49.697750 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4" 
event={"ID":"b330afef-9be2-4944-b014-0b6b2478316d","Type":"ContainerDied","Data":"48ab0488dc16a2a771cc7a688671e5a1fd75e4b017b33daaa9d0bef5570d57cc"} Feb 02 17:17:49 crc kubenswrapper[4858]: I0202 17:17:49.721955 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-t8jfm" podStartSLOduration=149.721933877 podStartE2EDuration="2m29.721933877s" podCreationTimestamp="2026-02-02 17:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:17:49.716635991 +0000 UTC m=+170.869051256" watchObservedRunningTime="2026-02-02 17:17:49.721933877 +0000 UTC m=+170.874349132" Feb 02 17:17:49 crc kubenswrapper[4858]: I0202 17:17:49.977542 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:17:51 crc kubenswrapper[4858]: I0202 17:17:51.362558 4858 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7c9rp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 02 17:17:51 crc kubenswrapper[4858]: I0202 17:17:51.362646 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" podUID="0b36dd0a-26c9-4d5f-ae02-aa432af223ad" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 02 17:17:51 crc kubenswrapper[4858]: I0202 17:17:51.458888 4858 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-9t6d4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 02 17:17:51 crc kubenswrapper[4858]: I0202 17:17:51.458961 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4" podUID="b330afef-9be2-4944-b014-0b6b2478316d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 02 17:17:52 crc kubenswrapper[4858]: I0202 17:17:52.533137 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-ffh76" Feb 02 17:17:57 crc kubenswrapper[4858]: I0202 17:17:57.807516 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:17:57 crc kubenswrapper[4858]: I0202 17:17:57.807854 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:17:59 crc kubenswrapper[4858]: E0202 17:17:59.465103 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 02 17:17:59 crc kubenswrapper[4858]: E0202 17:17:59.465487 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbpdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-5nj69_openshift-marketplace(9015cfbb-4091-4598-b5fd-007d2372a89e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 17:17:59 crc kubenswrapper[4858]: E0202 17:17:59.466691 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-5nj69" podUID="9015cfbb-4091-4598-b5fd-007d2372a89e" Feb 02 17:18:00 crc kubenswrapper[4858]: E0202 17:18:00.618393 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-5nj69" podUID="9015cfbb-4091-4598-b5fd-007d2372a89e" Feb 02 17:18:00 crc kubenswrapper[4858]: E0202 17:18:00.698709 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 02 17:18:00 crc kubenswrapper[4858]: E0202 17:18:00.699163 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7dmmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7x482_openshift-marketplace(69eb2d24-ee9f-4ef2-8bf0-233099196e0d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 17:18:00 crc kubenswrapper[4858]: E0202 17:18:00.700238 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-7x482" podUID="69eb2d24-ee9f-4ef2-8bf0-233099196e0d" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.701431 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" Feb 02 17:18:00 crc kubenswrapper[4858]: E0202 17:18:00.722182 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 02 17:18:00 crc kubenswrapper[4858]: E0202 17:18:00.722322 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dgg8w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-w9kqv_openshift-marketplace(58698d7f-881d-44c8-8457-9595f4953b9f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.723308 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9d97c9b9-wwdhp"] Feb 02 17:18:00 crc kubenswrapper[4858]: E0202 17:18:00.735594 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-w9kqv" podUID="58698d7f-881d-44c8-8457-9595f4953b9f" Feb 02 17:18:00 crc kubenswrapper[4858]: E0202 17:18:00.735724 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2127439-d73b-439f-9644-3ea7068956c2" containerName="pruner" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.735764 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2127439-d73b-439f-9644-3ea7068956c2" containerName="pruner" Feb 02 17:18:00 crc kubenswrapper[4858]: E0202 17:18:00.735793 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b36dd0a-26c9-4d5f-ae02-aa432af223ad" containerName="controller-manager" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.735802 
4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b36dd0a-26c9-4d5f-ae02-aa432af223ad" containerName="controller-manager" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.736522 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2127439-d73b-439f-9644-3ea7068956c2" containerName="pruner" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.736560 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b36dd0a-26c9-4d5f-ae02-aa432af223ad" containerName="controller-manager" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.739406 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.762673 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9d97c9b9-wwdhp"] Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.783463 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.783576 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7c9rp" event={"ID":"0b36dd0a-26c9-4d5f-ae02-aa432af223ad","Type":"ContainerDied","Data":"0da56fd2ac3c9be5ee51ddb757f79ef3e8827a6116379ccaefbec418b3d0dd18"} Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.783607 4858 scope.go:117] "RemoveContainer" containerID="6bd65021832fb1ebf57569393e6c2a2b4138dcc0981daf10cdf45f8be31dd190" Feb 02 17:18:00 crc kubenswrapper[4858]: E0202 17:18:00.787316 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 02 17:18:00 crc kubenswrapper[4858]: E0202 17:18:00.787420 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrhl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vqhch_openshift-marketplace(2bd40f69-131b-4d0c-87d9-bfae63f9a4eb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 17:18:00 crc kubenswrapper[4858]: E0202 17:18:00.788682 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vqhch" podUID="2bd40f69-131b-4d0c-87d9-bfae63f9a4eb" Feb 02 17:18:00 crc kubenswrapper[4858]: E0202 17:18:00.810306 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-7x482" podUID="69eb2d24-ee9f-4ef2-8bf0-233099196e0d" Feb 02 17:18:00 crc kubenswrapper[4858]: E0202 17:18:00.813298 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-w9kqv" podUID="58698d7f-881d-44c8-8457-9595f4953b9f" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.838192 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-config\") pod \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\" (UID: \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\") " Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.838266 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-client-ca\") pod \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\" (UID: \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\") " Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.838318 4858 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-serving-cert\") pod \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\" (UID: \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\") " Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.838356 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47dsd\" (UniqueName: \"kubernetes.io/projected/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-kube-api-access-47dsd\") pod \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\" (UID: \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\") " Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.838462 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-proxy-ca-bundles\") pod \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\" (UID: \"0b36dd0a-26c9-4d5f-ae02-aa432af223ad\") " Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.838660 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf652e77-7bda-4fc5-afde-f36ca4d94feb-config\") pod \"controller-manager-9d97c9b9-wwdhp\" (UID: \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\") " pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.838703 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bf652e77-7bda-4fc5-afde-f36ca4d94feb-proxy-ca-bundles\") pod \"controller-manager-9d97c9b9-wwdhp\" (UID: \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\") " pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.838741 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4vjd\" (UniqueName: \"kubernetes.io/projected/bf652e77-7bda-4fc5-afde-f36ca4d94feb-kube-api-access-k4vjd\") pod \"controller-manager-9d97c9b9-wwdhp\" (UID: \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\") " pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.838796 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf652e77-7bda-4fc5-afde-f36ca4d94feb-serving-cert\") pod \"controller-manager-9d97c9b9-wwdhp\" (UID: \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\") " pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.838819 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf652e77-7bda-4fc5-afde-f36ca4d94feb-client-ca\") pod \"controller-manager-9d97c9b9-wwdhp\" (UID: \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\") " pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.839690 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-client-ca" (OuterVolumeSpecName: "client-ca") pod "0b36dd0a-26c9-4d5f-ae02-aa432af223ad" (UID: "0b36dd0a-26c9-4d5f-ae02-aa432af223ad"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.839713 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0b36dd0a-26c9-4d5f-ae02-aa432af223ad" (UID: "0b36dd0a-26c9-4d5f-ae02-aa432af223ad"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.840409 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-config" (OuterVolumeSpecName: "config") pod "0b36dd0a-26c9-4d5f-ae02-aa432af223ad" (UID: "0b36dd0a-26c9-4d5f-ae02-aa432af223ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.848002 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-kube-api-access-47dsd" (OuterVolumeSpecName: "kube-api-access-47dsd") pod "0b36dd0a-26c9-4d5f-ae02-aa432af223ad" (UID: "0b36dd0a-26c9-4d5f-ae02-aa432af223ad"). InnerVolumeSpecName "kube-api-access-47dsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.848136 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b36dd0a-26c9-4d5f-ae02-aa432af223ad" (UID: "0b36dd0a-26c9-4d5f-ae02-aa432af223ad"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.850487 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.939863 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf652e77-7bda-4fc5-afde-f36ca4d94feb-config\") pod \"controller-manager-9d97c9b9-wwdhp\" (UID: \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\") " pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.939930 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bf652e77-7bda-4fc5-afde-f36ca4d94feb-proxy-ca-bundles\") pod \"controller-manager-9d97c9b9-wwdhp\" (UID: \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\") " pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.939965 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4vjd\" (UniqueName: \"kubernetes.io/projected/bf652e77-7bda-4fc5-afde-f36ca4d94feb-kube-api-access-k4vjd\") pod \"controller-manager-9d97c9b9-wwdhp\" (UID: \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\") " pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.940047 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf652e77-7bda-4fc5-afde-f36ca4d94feb-serving-cert\") pod \"controller-manager-9d97c9b9-wwdhp\" (UID: \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\") " pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.940071 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf652e77-7bda-4fc5-afde-f36ca4d94feb-client-ca\") pod \"controller-manager-9d97c9b9-wwdhp\" (UID: \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\") " pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.940111 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.940124 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.940136 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.940148 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.940158 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47dsd\" (UniqueName: \"kubernetes.io/projected/0b36dd0a-26c9-4d5f-ae02-aa432af223ad-kube-api-access-47dsd\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 
17:18:00.941113 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf652e77-7bda-4fc5-afde-f36ca4d94feb-client-ca\") pod \"controller-manager-9d97c9b9-wwdhp\" (UID: \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\") " pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.942854 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bf652e77-7bda-4fc5-afde-f36ca4d94feb-proxy-ca-bundles\") pod \"controller-manager-9d97c9b9-wwdhp\" (UID: \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\") " pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.942880 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf652e77-7bda-4fc5-afde-f36ca4d94feb-config\") pod \"controller-manager-9d97c9b9-wwdhp\" (UID: \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\") " pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.948556 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf652e77-7bda-4fc5-afde-f36ca4d94feb-serving-cert\") pod \"controller-manager-9d97c9b9-wwdhp\" (UID: \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\") " pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" Feb 02 17:18:00 crc kubenswrapper[4858]: I0202 17:18:00.958065 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4vjd\" (UniqueName: \"kubernetes.io/projected/bf652e77-7bda-4fc5-afde-f36ca4d94feb-kube-api-access-k4vjd\") pod \"controller-manager-9d97c9b9-wwdhp\" (UID: \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\") " pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.041086 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hdth\" (UniqueName: \"kubernetes.io/projected/b330afef-9be2-4944-b014-0b6b2478316d-kube-api-access-2hdth\") pod \"b330afef-9be2-4944-b014-0b6b2478316d\" (UID: \"b330afef-9be2-4944-b014-0b6b2478316d\") " Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.041157 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b330afef-9be2-4944-b014-0b6b2478316d-client-ca\") pod \"b330afef-9be2-4944-b014-0b6b2478316d\" (UID: \"b330afef-9be2-4944-b014-0b6b2478316d\") " Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.041219 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b330afef-9be2-4944-b014-0b6b2478316d-config\") pod \"b330afef-9be2-4944-b014-0b6b2478316d\" (UID: \"b330afef-9be2-4944-b014-0b6b2478316d\") " Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.041248 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b330afef-9be2-4944-b014-0b6b2478316d-serving-cert\") pod \"b330afef-9be2-4944-b014-0b6b2478316d\" (UID: \"b330afef-9be2-4944-b014-0b6b2478316d\") " Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.042524 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b330afef-9be2-4944-b014-0b6b2478316d-client-ca" (OuterVolumeSpecName: "client-ca") pod "b330afef-9be2-4944-b014-0b6b2478316d" (UID: "b330afef-9be2-4944-b014-0b6b2478316d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.044644 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b330afef-9be2-4944-b014-0b6b2478316d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b330afef-9be2-4944-b014-0b6b2478316d" (UID: "b330afef-9be2-4944-b014-0b6b2478316d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.046376 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b330afef-9be2-4944-b014-0b6b2478316d-kube-api-access-2hdth" (OuterVolumeSpecName: "kube-api-access-2hdth") pod "b330afef-9be2-4944-b014-0b6b2478316d" (UID: "b330afef-9be2-4944-b014-0b6b2478316d"). InnerVolumeSpecName "kube-api-access-2hdth". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.046569 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b330afef-9be2-4944-b014-0b6b2478316d-config" (OuterVolumeSpecName: "config") pod "b330afef-9be2-4944-b014-0b6b2478316d" (UID: "b330afef-9be2-4944-b014-0b6b2478316d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.073856 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.118716 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7c9rp"] Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.122080 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7c9rp"] Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.142673 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hdth\" (UniqueName: \"kubernetes.io/projected/b330afef-9be2-4944-b014-0b6b2478316d-kube-api-access-2hdth\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.142741 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b330afef-9be2-4944-b014-0b6b2478316d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.142754 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b330afef-9be2-4944-b014-0b6b2478316d-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.142765 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b330afef-9be2-4944-b014-0b6b2478316d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.331392 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9d97c9b9-wwdhp"] Feb 02 17:18:01 crc kubenswrapper[4858]: W0202 17:18:01.441426 4858 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf652e77_7bda_4fc5_afde_f36ca4d94feb.slice/crio-3574403d5e47d9aaf097e6498ecce27d694d876f836f941e10b6c34f89ca360a WatchSource:0}: Error finding container 3574403d5e47d9aaf097e6498ecce27d694d876f836f941e10b6c34f89ca360a: Status 404 returned error can't find the container with id 3574403d5e47d9aaf097e6498ecce27d694d876f836f941e10b6c34f89ca360a Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.789888 4858 generic.go:334] "Generic (PLEG): container finished" podID="c6a77909-6aaf-4339-84fc-a3121e8d15f3" containerID="b5086601512153ceed395a9960682ffe0d3f29c3b692298248146625bc56c25e" exitCode=0 Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.790206 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cncg2" event={"ID":"c6a77909-6aaf-4339-84fc-a3121e8d15f3","Type":"ContainerDied","Data":"b5086601512153ceed395a9960682ffe0d3f29c3b692298248146625bc56c25e"} Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.793905 4858 generic.go:334] "Generic (PLEG): container finished" podID="9b4f9546-2d15-4925-aba0-40e3b10098a0" containerID="12079498b645c1087925f54f14c6d5688fd2a9c359e4bbc859348ebc245b5df7" exitCode=0 Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.793957 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtpsf" event={"ID":"9b4f9546-2d15-4925-aba0-40e3b10098a0","Type":"ContainerDied","Data":"12079498b645c1087925f54f14c6d5688fd2a9c359e4bbc859348ebc245b5df7"} Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.803369 4858 generic.go:334] "Generic (PLEG): container finished" podID="a32894ac-052e-4a93-a3d1-79aeec5b8869" containerID="c6a4b0376b8073701c512b55a646248ed5c7e442a4aae731a32fa5fc635c50ab" exitCode=0 Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.803476 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmzkp" event={"ID":"a32894ac-052e-4a93-a3d1-79aeec5b8869","Type":"ContainerDied","Data":"c6a4b0376b8073701c512b55a646248ed5c7e442a4aae731a32fa5fc635c50ab"} Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.808029 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4" event={"ID":"b330afef-9be2-4944-b014-0b6b2478316d","Type":"ContainerDied","Data":"ca4c59e0b4cb469e73be338c19bc6a8ab21380c015647009e238e54796b6e1be"} Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.808096 4858 scope.go:117] "RemoveContainer" containerID="48ab0488dc16a2a771cc7a688671e5a1fd75e4b017b33daaa9d0bef5570d57cc" Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.808217 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4" Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.816539 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" event={"ID":"bf652e77-7bda-4fc5-afde-f36ca4d94feb","Type":"ContainerStarted","Data":"9f85cc9eaa88a0d023aeb72575f95f5f3fae7d7b500c95d63acf1da452edbf88"} Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.816583 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" event={"ID":"bf652e77-7bda-4fc5-afde-f36ca4d94feb","Type":"ContainerStarted","Data":"3574403d5e47d9aaf097e6498ecce27d694d876f836f941e10b6c34f89ca360a"} Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.817527 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.824429 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.830822 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9zvf" event={"ID":"f1040c7c-84e3-41c7-9484-13022fbcef4b","Type":"ContainerStarted","Data":"eba582836dbbd37917c2d38d7a4e786aff64e2a62fc54fdc2347777c88955bc7"} Feb 02 17:18:01 crc kubenswrapper[4858]: E0202 17:18:01.833837 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vqhch" podUID="2bd40f69-131b-4d0c-87d9-bfae63f9a4eb" Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.935413 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" podStartSLOduration=12.935389862 podStartE2EDuration="12.935389862s" podCreationTimestamp="2026-02-02 17:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:18:01.901100988 +0000 UTC m=+183.053516263" watchObservedRunningTime="2026-02-02 17:18:01.935389862 +0000 UTC m=+183.087805127" Feb 02 17:18:01 crc kubenswrapper[4858]: E0202 17:18:01.944386 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb330afef_9be2_4944_b014_0b6b2478316d.slice/crio-ca4c59e0b4cb469e73be338c19bc6a8ab21380c015647009e238e54796b6e1be\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb330afef_9be2_4944_b014_0b6b2478316d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1040c7c_84e3_41c7_9484_13022fbcef4b.slice/crio-eba582836dbbd37917c2d38d7a4e786aff64e2a62fc54fdc2347777c88955bc7.scope\": RecentStats: unable to find data in memory cache]" Feb 02 17:18:01 crc kubenswrapper[4858]: I0202 17:18:01.968726 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4"] Feb 02 17:18:01 crc kubenswrapper[4858]: 
I0202 17:18:01.986268 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9t6d4"] Feb 02 17:18:02 crc kubenswrapper[4858]: I0202 17:18:02.032269 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mj6r9" Feb 02 17:18:02 crc kubenswrapper[4858]: I0202 17:18:02.410229 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b36dd0a-26c9-4d5f-ae02-aa432af223ad" path="/var/lib/kubelet/pods/0b36dd0a-26c9-4d5f-ae02-aa432af223ad/volumes" Feb 02 17:18:02 crc kubenswrapper[4858]: I0202 17:18:02.411525 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b330afef-9be2-4944-b014-0b6b2478316d" path="/var/lib/kubelet/pods/b330afef-9be2-4944-b014-0b6b2478316d/volumes" Feb 02 17:18:02 crc kubenswrapper[4858]: I0202 17:18:02.841188 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtpsf" event={"ID":"9b4f9546-2d15-4925-aba0-40e3b10098a0","Type":"ContainerStarted","Data":"2d0339f0eb4c03d67e98e4f24394913ebad0d57c561976de80c1ae12dde39132"} Feb 02 17:18:02 crc kubenswrapper[4858]: I0202 17:18:02.845429 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmzkp" event={"ID":"a32894ac-052e-4a93-a3d1-79aeec5b8869","Type":"ContainerStarted","Data":"ede21328d6819dde3686a8a8ef6fe7d48966ad97888d39ea8220955d6f5de202"} Feb 02 17:18:02 crc kubenswrapper[4858]: I0202 17:18:02.850592 4858 generic.go:334] "Generic (PLEG): container finished" podID="f1040c7c-84e3-41c7-9484-13022fbcef4b" containerID="eba582836dbbd37917c2d38d7a4e786aff64e2a62fc54fdc2347777c88955bc7" exitCode=0 Feb 02 17:18:02 crc kubenswrapper[4858]: I0202 17:18:02.850684 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9zvf" event={"ID":"f1040c7c-84e3-41c7-9484-13022fbcef4b","Type":"ContainerDied","Data":"eba582836dbbd37917c2d38d7a4e786aff64e2a62fc54fdc2347777c88955bc7"} Feb 02 17:18:02 crc kubenswrapper[4858]: I0202 17:18:02.858463 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cncg2" event={"ID":"c6a77909-6aaf-4339-84fc-a3121e8d15f3","Type":"ContainerStarted","Data":"a46c52659e3fd2c72417239c582ffbf4d8328132771dd88cf6d23fd70deabdc6"} Feb 02 17:18:02 crc kubenswrapper[4858]: I0202 17:18:02.865995 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qtpsf" podStartSLOduration=2.81929066 podStartE2EDuration="35.865959494s" podCreationTimestamp="2026-02-02 17:17:27 +0000 UTC" firstStartedPulling="2026-02-02 17:17:29.194709535 +0000 UTC m=+150.347124800" lastFinishedPulling="2026-02-02 17:18:02.241378369 +0000 UTC m=+183.393793634" observedRunningTime="2026-02-02 17:18:02.861152862 +0000 UTC m=+184.013568187" watchObservedRunningTime="2026-02-02 17:18:02.865959494 +0000 UTC m=+184.018374759" Feb 02 17:18:02 crc kubenswrapper[4858]: I0202 17:18:02.877734 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cncg2" podStartSLOduration=2.749403755 podStartE2EDuration="35.877710491s" podCreationTimestamp="2026-02-02 17:17:27 +0000 UTC" firstStartedPulling="2026-02-02 17:17:29.191564112 +0000 UTC m=+150.343979377" lastFinishedPulling="2026-02-02 17:18:02.319870848 +0000 UTC m=+183.472286113" 
observedRunningTime="2026-02-02 17:18:02.877609518 +0000 UTC m=+184.030024803" watchObservedRunningTime="2026-02-02 17:18:02.877710491 +0000 UTC m=+184.030125776" Feb 02 17:18:02 crc kubenswrapper[4858]: I0202 17:18:02.894846 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vmzkp" podStartSLOduration=2.79253768 podStartE2EDuration="33.894780776s" podCreationTimestamp="2026-02-02 17:17:29 +0000 UTC" firstStartedPulling="2026-02-02 17:17:31.276961617 +0000 UTC m=+152.429376892" lastFinishedPulling="2026-02-02 17:18:02.379204723 +0000 UTC m=+183.531619988" observedRunningTime="2026-02-02 17:18:02.892512109 +0000 UTC m=+184.044927384" watchObservedRunningTime="2026-02-02 17:18:02.894780776 +0000 UTC m=+184.047196041" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.622651 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8"] Feb 02 17:18:03 crc kubenswrapper[4858]: E0202 17:18:03.623033 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b330afef-9be2-4944-b014-0b6b2478316d" containerName="route-controller-manager" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.623049 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b330afef-9be2-4944-b014-0b6b2478316d" containerName="route-controller-manager" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.623153 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b330afef-9be2-4944-b014-0b6b2478316d" containerName="route-controller-manager" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.623481 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.624927 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.625812 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.625900 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.625954 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.626104 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.626231 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.635805 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8"] Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.791952 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlq58\" (UniqueName: \"kubernetes.io/projected/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-kube-api-access-vlq58\") pod \"route-controller-manager-75cb47dc7f-zbsg8\" (UID: 
\"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac\") " pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.792032 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-config\") pod \"route-controller-manager-75cb47dc7f-zbsg8\" (UID: \"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac\") " pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.792078 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-client-ca\") pod \"route-controller-manager-75cb47dc7f-zbsg8\" (UID: \"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac\") " pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.792101 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-serving-cert\") pod \"route-controller-manager-75cb47dc7f-zbsg8\" (UID: \"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac\") " pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.865502 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9zvf" event={"ID":"f1040c7c-84e3-41c7-9484-13022fbcef4b","Type":"ContainerStarted","Data":"96bec0ad2a1fab1d3ec2d56324748f74773965ae489a2735e53b3befd910c831"} Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.879700 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c9zvf" podStartSLOduration=2.756205232 podStartE2EDuration="33.879678055s" podCreationTimestamp="2026-02-02 17:17:30 +0000 UTC" firstStartedPulling="2026-02-02 17:17:32.400452604 +0000 UTC m=+153.552867869" lastFinishedPulling="2026-02-02 17:18:03.523925427 +0000 UTC m=+184.676340692" observedRunningTime="2026-02-02 17:18:03.879038936 +0000 UTC m=+185.031454201" watchObservedRunningTime="2026-02-02 17:18:03.879678055 +0000 UTC m=+185.032093320" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.893691 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-serving-cert\") pod \"route-controller-manager-75cb47dc7f-zbsg8\" (UID: \"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac\") " pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.894157 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlq58\" (UniqueName: \"kubernetes.io/projected/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-kube-api-access-vlq58\") pod \"route-controller-manager-75cb47dc7f-zbsg8\" (UID: \"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac\") " pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.894216 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-config\") pod 
\"route-controller-manager-75cb47dc7f-zbsg8\" (UID: \"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac\") " pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.894288 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-client-ca\") pod \"route-controller-manager-75cb47dc7f-zbsg8\" (UID: \"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac\") " pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.895110 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-client-ca\") pod \"route-controller-manager-75cb47dc7f-zbsg8\" (UID: \"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac\") " pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.895562 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-config\") pod \"route-controller-manager-75cb47dc7f-zbsg8\" (UID: \"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac\") " pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.914890 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-serving-cert\") pod \"route-controller-manager-75cb47dc7f-zbsg8\" (UID: \"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac\") " pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.915043 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlq58\" (UniqueName: \"kubernetes.io/projected/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-kube-api-access-vlq58\") pod \"route-controller-manager-75cb47dc7f-zbsg8\" (UID: \"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac\") " pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" Feb 02 17:18:03 crc kubenswrapper[4858]: I0202 17:18:03.936515 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" Feb 02 17:18:04 crc kubenswrapper[4858]: I0202 17:18:04.123679 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8"] Feb 02 17:18:04 crc kubenswrapper[4858]: I0202 17:18:04.871696 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" event={"ID":"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac","Type":"ContainerStarted","Data":"f76c9567a970d38e4f9c1050e7b43aa2d62425bd78eb7b396924917ee917b9cf"} Feb 02 17:18:04 crc kubenswrapper[4858]: I0202 17:18:04.872290 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" event={"ID":"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac","Type":"ContainerStarted","Data":"c335b521a6482bf4b72eb11c87aac928e916e319731f9e28c63893146ec2f85e"} Feb 02 17:18:04 crc kubenswrapper[4858]: I0202 17:18:04.872324 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" Feb 02 17:18:04 crc kubenswrapper[4858]: I0202 17:18:04.894434 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" podStartSLOduration=15.894412366 podStartE2EDuration="15.894412366s" podCreationTimestamp="2026-02-02 17:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:18:04.89352554 +0000 UTC m=+186.045940805" watchObservedRunningTime="2026-02-02 17:18:04.894412366 +0000 UTC m=+186.046827631" Feb 02 17:18:04 crc kubenswrapper[4858]: I0202 17:18:04.912530 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" Feb 02 17:18:07 crc kubenswrapper[4858]: I0202 17:18:07.721382 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qtpsf" Feb 02 17:18:07 crc kubenswrapper[4858]: I0202 17:18:07.721845 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qtpsf" Feb 02 17:18:07 crc kubenswrapper[4858]: I0202 17:18:07.953608 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qtpsf" Feb 02 17:18:07 crc kubenswrapper[4858]: I0202 17:18:07.994347 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qtpsf" Feb 02 17:18:08 crc kubenswrapper[4858]: I0202 17:18:08.130546 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cncg2" Feb 02 17:18:08 crc kubenswrapper[4858]: I0202 17:18:08.130588 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cncg2" Feb 02 17:18:08 crc kubenswrapper[4858]: I0202 17:18:08.173131 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cncg2" Feb 02 17:18:08 crc kubenswrapper[4858]: I0202 17:18:08.466717 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
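[Annotation] The "SyncLoop (probe)" entries come in transition pairs: startup flips from "unhealthy" to "started", readiness from "" (no result yet) to "ready", with no line per individual probe attempt. A minimal sketch of that transition-only reporting pattern, under the assumption that results are deduplicated per (pod, probe) pair; the types and tracker are illustrative, not the kubelet's:

```go
package main

import "fmt"

// probeKey identifies one probe on one pod, mirroring the fields the
// "SyncLoop (probe)" entries print. Illustrative types only.
type probeKey struct {
	pod   string // namespace/name
	probe string // "startup" or "readiness"
}

// tracker reports a probe result only when it changes, which is why the
// journal shows pairs like unhealthy -> started rather than one line
// per probe attempt.
type tracker struct {
	last map[probeKey]string
}

func (t *tracker) observe(k probeKey, status string) {
	if t.last[k] == status {
		return // unchanged result: nothing to log
	}
	t.last[k] = status
	fmt.Printf("SyncLoop (probe) probe=%q status=%q pod=%q\n", k.probe, status, k.pod)
}

func main() {
	t := &tracker{last: map[probeKey]string{}}
	pod := "openshift-marketplace/community-operators-qtpsf"
	t.observe(probeKey{pod, "startup"}, "unhealthy")
	t.observe(probeKey{pod, "startup"}, "unhealthy") // suppressed: no transition
	t.observe(probeKey{pod, "startup"}, "started")
	t.observe(probeKey{pod, "readiness"}, "")
	t.observe(probeKey{pod, "readiness"}, "ready")
}
```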
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 17:18:08 crc kubenswrapper[4858]: I0202 17:18:08.838284 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9d97c9b9-wwdhp"] Feb 02 17:18:08 crc kubenswrapper[4858]: I0202 17:18:08.838585 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" podUID="bf652e77-7bda-4fc5-afde-f36ca4d94feb" containerName="controller-manager" containerID="cri-o://9f85cc9eaa88a0d023aeb72575f95f5f3fae7d7b500c95d63acf1da452edbf88" gracePeriod=30 Feb 02 17:18:08 crc kubenswrapper[4858]: I0202 17:18:08.932859 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8"] Feb 02 17:18:08 crc kubenswrapper[4858]: I0202 17:18:08.933346 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" podUID="d0678ba1-5e56-42c2-b088-a6e6ab7d59ac" containerName="route-controller-manager" containerID="cri-o://f76c9567a970d38e4f9c1050e7b43aa2d62425bd78eb7b396924917ee917b9cf" gracePeriod=30 Feb 02 17:18:08 crc kubenswrapper[4858]: I0202 17:18:08.961219 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cncg2" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.333951 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.366323 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlq58\" (UniqueName: \"kubernetes.io/projected/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-kube-api-access-vlq58\") pod \"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac\" (UID: \"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac\") " Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.366366 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-client-ca\") pod \"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac\" (UID: \"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac\") " Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.366409 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-config\") pod \"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac\" (UID: \"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac\") " Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.366445 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-serving-cert\") pod \"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac\" (UID: \"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac\") " Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.367950 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-config" (OuterVolumeSpecName: "config") pod "d0678ba1-5e56-42c2-b088-a6e6ab7d59ac" (UID: "d0678ba1-5e56-42c2-b088-a6e6ab7d59ac"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.368571 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-client-ca" (OuterVolumeSpecName: "client-ca") pod "d0678ba1-5e56-42c2-b088-a6e6ab7d59ac" (UID: "d0678ba1-5e56-42c2-b088-a6e6ab7d59ac"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.372442 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-kube-api-access-vlq58" (OuterVolumeSpecName: "kube-api-access-vlq58") pod "d0678ba1-5e56-42c2-b088-a6e6ab7d59ac" (UID: "d0678ba1-5e56-42c2-b088-a6e6ab7d59ac"). InnerVolumeSpecName "kube-api-access-vlq58". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.372515 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d0678ba1-5e56-42c2-b088-a6e6ab7d59ac" (UID: "d0678ba1-5e56-42c2-b088-a6e6ab7d59ac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.389687 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.467581 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf652e77-7bda-4fc5-afde-f36ca4d94feb-serving-cert\") pod \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\" (UID: \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\") " Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.467657 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bf652e77-7bda-4fc5-afde-f36ca4d94feb-proxy-ca-bundles\") pod \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\" (UID: \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\") " Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.467700 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4vjd\" (UniqueName: \"kubernetes.io/projected/bf652e77-7bda-4fc5-afde-f36ca4d94feb-kube-api-access-k4vjd\") pod \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\" (UID: \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\") " Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.467726 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf652e77-7bda-4fc5-afde-f36ca4d94feb-config\") pod \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\" (UID: \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\") " Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.467772 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf652e77-7bda-4fc5-afde-f36ca4d94feb-client-ca\") pod \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\" (UID: \"bf652e77-7bda-4fc5-afde-f36ca4d94feb\") " Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.467928 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlq58\" (UniqueName: 
\"kubernetes.io/projected/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-kube-api-access-vlq58\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.467943 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.467955 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.467966 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.468851 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf652e77-7bda-4fc5-afde-f36ca4d94feb-client-ca" (OuterVolumeSpecName: "client-ca") pod "bf652e77-7bda-4fc5-afde-f36ca4d94feb" (UID: "bf652e77-7bda-4fc5-afde-f36ca4d94feb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.471007 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf652e77-7bda-4fc5-afde-f36ca4d94feb-config" (OuterVolumeSpecName: "config") pod "bf652e77-7bda-4fc5-afde-f36ca4d94feb" (UID: "bf652e77-7bda-4fc5-afde-f36ca4d94feb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.470003 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf652e77-7bda-4fc5-afde-f36ca4d94feb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bf652e77-7bda-4fc5-afde-f36ca4d94feb" (UID: "bf652e77-7bda-4fc5-afde-f36ca4d94feb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.472576 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf652e77-7bda-4fc5-afde-f36ca4d94feb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bf652e77-7bda-4fc5-afde-f36ca4d94feb" (UID: "bf652e77-7bda-4fc5-afde-f36ca4d94feb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.472673 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf652e77-7bda-4fc5-afde-f36ca4d94feb-kube-api-access-k4vjd" (OuterVolumeSpecName: "kube-api-access-k4vjd") pod "bf652e77-7bda-4fc5-afde-f36ca4d94feb" (UID: "bf652e77-7bda-4fc5-afde-f36ca4d94feb"). InnerVolumeSpecName "kube-api-access-k4vjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.568729 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bf652e77-7bda-4fc5-afde-f36ca4d94feb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.568765 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4vjd\" (UniqueName: \"kubernetes.io/projected/bf652e77-7bda-4fc5-afde-f36ca4d94feb-kube-api-access-k4vjd\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.568778 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf652e77-7bda-4fc5-afde-f36ca4d94feb-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.568789 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf652e77-7bda-4fc5-afde-f36ca4d94feb-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.568801 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf652e77-7bda-4fc5-afde-f36ca4d94feb-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.904078 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vmzkp" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.905069 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vmzkp" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.916742 4858 generic.go:334] "Generic (PLEG): container finished" podID="d0678ba1-5e56-42c2-b088-a6e6ab7d59ac" containerID="f76c9567a970d38e4f9c1050e7b43aa2d62425bd78eb7b396924917ee917b9cf" exitCode=0 Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.916916 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.919399 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" event={"ID":"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac","Type":"ContainerDied","Data":"f76c9567a970d38e4f9c1050e7b43aa2d62425bd78eb7b396924917ee917b9cf"} Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.919662 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8" event={"ID":"d0678ba1-5e56-42c2-b088-a6e6ab7d59ac","Type":"ContainerDied","Data":"c335b521a6482bf4b72eb11c87aac928e916e319731f9e28c63893146ec2f85e"} Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.919744 4858 scope.go:117] "RemoveContainer" containerID="f76c9567a970d38e4f9c1050e7b43aa2d62425bd78eb7b396924917ee917b9cf" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.922942 4858 generic.go:334] "Generic (PLEG): container finished" podID="bf652e77-7bda-4fc5-afde-f36ca4d94feb" containerID="9f85cc9eaa88a0d023aeb72575f95f5f3fae7d7b500c95d63acf1da452edbf88" exitCode=0 Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.923090 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.923130 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" event={"ID":"bf652e77-7bda-4fc5-afde-f36ca4d94feb","Type":"ContainerDied","Data":"9f85cc9eaa88a0d023aeb72575f95f5f3fae7d7b500c95d63acf1da452edbf88"} Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.923166 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9d97c9b9-wwdhp" event={"ID":"bf652e77-7bda-4fc5-afde-f36ca4d94feb","Type":"ContainerDied","Data":"3574403d5e47d9aaf097e6498ecce27d694d876f836f941e10b6c34f89ca360a"} Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.986152 4858 scope.go:117] "RemoveContainer" containerID="f76c9567a970d38e4f9c1050e7b43aa2d62425bd78eb7b396924917ee917b9cf" Feb 02 17:18:09 crc kubenswrapper[4858]: E0202 17:18:09.990149 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f76c9567a970d38e4f9c1050e7b43aa2d62425bd78eb7b396924917ee917b9cf\": container with ID starting with f76c9567a970d38e4f9c1050e7b43aa2d62425bd78eb7b396924917ee917b9cf not found: ID does not exist" containerID="f76c9567a970d38e4f9c1050e7b43aa2d62425bd78eb7b396924917ee917b9cf" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.990204 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f76c9567a970d38e4f9c1050e7b43aa2d62425bd78eb7b396924917ee917b9cf"} err="failed to get container status \"f76c9567a970d38e4f9c1050e7b43aa2d62425bd78eb7b396924917ee917b9cf\": rpc error: code = NotFound desc = could not find container \"f76c9567a970d38e4f9c1050e7b43aa2d62425bd78eb7b396924917ee917b9cf\": container with ID starting with f76c9567a970d38e4f9c1050e7b43aa2d62425bd78eb7b396924917ee917b9cf not found: ID does not exist" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.990267 4858 scope.go:117] "RemoveContainer" containerID="9f85cc9eaa88a0d023aeb72575f95f5f3fae7d7b500c95d63acf1da452edbf88" Feb 02 17:18:09 crc kubenswrapper[4858]: I0202 17:18:09.994719 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9n5ph"] Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.009128 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cncg2"] Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.009378 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vmzkp" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.088616 4858 scope.go:117] "RemoveContainer" containerID="9f85cc9eaa88a0d023aeb72575f95f5f3fae7d7b500c95d63acf1da452edbf88" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.091885 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8"] Feb 02 17:18:10 crc kubenswrapper[4858]: E0202 17:18:10.092195 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f85cc9eaa88a0d023aeb72575f95f5f3fae7d7b500c95d63acf1da452edbf88\": container with ID starting with 9f85cc9eaa88a0d023aeb72575f95f5f3fae7d7b500c95d63acf1da452edbf88 not found: ID does not exist" 
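[Annotation] The paired entries above, where "ContainerStatus from runtime service failed ... NotFound" is logged at error level but the subsequent "DeleteContainer returned error" is only informational, show removal being treated as idempotent: a container that no longer exists is already in the desired state. A minimal sketch of that pattern, with a sentinel error standing in for the runtime's gRPC NotFound; the runtime parameter is an illustrative stand-in for the CRI client:

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the gRPC NotFound the runtime returns above
// ("could not find container ... ID does not exist").
var errNotFound = errors.New("container not found")

// removeContainer sketches the idempotent-delete pattern the paired log
// entries suggest: NotFound means the container is already gone, so the
// removal is treated as having succeeded.
func removeContainer(runtime func(id string) error, id string) error {
	if err := runtime(id); err != nil {
		if errors.Is(err, errNotFound) {
			// Logged, but not fatal: the desired state (container
			// absent) already holds.
			fmt.Printf("DeleteContainer returned error containerID=%q err=%v\n", id, err)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	gone := func(id string) error { return fmt.Errorf("rpc error: %w", errNotFound) }
	id := "9f85cc9eaa88a0d023aeb72575f95f5f3fae7d7b500c95d63acf1da452edbf88"
	if err := removeContainer(gone, id); err != nil {
		fmt.Println("unexpected:", err)
		return
	}
	fmt.Println("treated as removed")
}
```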
containerID="9f85cc9eaa88a0d023aeb72575f95f5f3fae7d7b500c95d63acf1da452edbf88" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.092238 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f85cc9eaa88a0d023aeb72575f95f5f3fae7d7b500c95d63acf1da452edbf88"} err="failed to get container status \"9f85cc9eaa88a0d023aeb72575f95f5f3fae7d7b500c95d63acf1da452edbf88\": rpc error: code = NotFound desc = could not find container \"9f85cc9eaa88a0d023aeb72575f95f5f3fae7d7b500c95d63acf1da452edbf88\": container with ID starting with 9f85cc9eaa88a0d023aeb72575f95f5f3fae7d7b500c95d63acf1da452edbf88 not found: ID does not exist" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.099386 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75cb47dc7f-zbsg8"] Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.105751 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vmzkp" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.115002 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9d97c9b9-wwdhp"] Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.118547 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-9d97c9b9-wwdhp"] Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.409242 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf652e77-7bda-4fc5-afde-f36ca4d94feb" path="/var/lib/kubelet/pods/bf652e77-7bda-4fc5-afde-f36ca4d94feb/volumes" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.409934 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0678ba1-5e56-42c2-b088-a6e6ab7d59ac" path="/var/lib/kubelet/pods/d0678ba1-5e56-42c2-b088-a6e6ab7d59ac/volumes" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.627599 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-68f6ccd7-fxpx6"] Feb 02 17:18:10 crc kubenswrapper[4858]: E0202 17:18:10.627871 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0678ba1-5e56-42c2-b088-a6e6ab7d59ac" containerName="route-controller-manager" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.627889 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0678ba1-5e56-42c2-b088-a6e6ab7d59ac" containerName="route-controller-manager" Feb 02 17:18:10 crc kubenswrapper[4858]: E0202 17:18:10.627905 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf652e77-7bda-4fc5-afde-f36ca4d94feb" containerName="controller-manager" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.627911 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf652e77-7bda-4fc5-afde-f36ca4d94feb" containerName="controller-manager" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.628017 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf652e77-7bda-4fc5-afde-f36ca4d94feb" containerName="controller-manager" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.628032 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0678ba1-5e56-42c2-b088-a6e6ab7d59ac" containerName="route-controller-manager" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.628448 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.631759 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc"] Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.632571 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.635180 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.635213 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.638077 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.639012 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.639551 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.639703 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.639849 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.640176 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.640540 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.640748 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.640748 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.640750 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.646716 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.651542 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68f6ccd7-fxpx6"] Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.658874 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc"] Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.685550 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpgq6\" (UniqueName: 
\"kubernetes.io/projected/68b434d7-79a1-4449-ba3e-2615931cc0b4-kube-api-access-bpgq6\") pod \"controller-manager-68f6ccd7-fxpx6\" (UID: \"68b434d7-79a1-4449-ba3e-2615931cc0b4\") " pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.685603 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68b434d7-79a1-4449-ba3e-2615931cc0b4-serving-cert\") pod \"controller-manager-68f6ccd7-fxpx6\" (UID: \"68b434d7-79a1-4449-ba3e-2615931cc0b4\") " pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.685629 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rhr8\" (UniqueName: \"kubernetes.io/projected/fca78be5-9e02-48f5-b575-bf47734fee5b-kube-api-access-5rhr8\") pod \"route-controller-manager-fd756b9df-jz9fc\" (UID: \"fca78be5-9e02-48f5-b575-bf47734fee5b\") " pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.685647 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68b434d7-79a1-4449-ba3e-2615931cc0b4-client-ca\") pod \"controller-manager-68f6ccd7-fxpx6\" (UID: \"68b434d7-79a1-4449-ba3e-2615931cc0b4\") " pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.685673 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fca78be5-9e02-48f5-b575-bf47734fee5b-serving-cert\") pod \"route-controller-manager-fd756b9df-jz9fc\" (UID: \"fca78be5-9e02-48f5-b575-bf47734fee5b\") " pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.685689 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fca78be5-9e02-48f5-b575-bf47734fee5b-config\") pod \"route-controller-manager-fd756b9df-jz9fc\" (UID: \"fca78be5-9e02-48f5-b575-bf47734fee5b\") " pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.685707 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b434d7-79a1-4449-ba3e-2615931cc0b4-config\") pod \"controller-manager-68f6ccd7-fxpx6\" (UID: \"68b434d7-79a1-4449-ba3e-2615931cc0b4\") " pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.685776 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/68b434d7-79a1-4449-ba3e-2615931cc0b4-proxy-ca-bundles\") pod \"controller-manager-68f6ccd7-fxpx6\" (UID: \"68b434d7-79a1-4449-ba3e-2615931cc0b4\") " pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.685811 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/fca78be5-9e02-48f5-b575-bf47734fee5b-client-ca\") pod \"route-controller-manager-fd756b9df-jz9fc\" (UID: \"fca78be5-9e02-48f5-b575-bf47734fee5b\") " pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.786735 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/68b434d7-79a1-4449-ba3e-2615931cc0b4-proxy-ca-bundles\") pod \"controller-manager-68f6ccd7-fxpx6\" (UID: \"68b434d7-79a1-4449-ba3e-2615931cc0b4\") " pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.786784 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fca78be5-9e02-48f5-b575-bf47734fee5b-client-ca\") pod \"route-controller-manager-fd756b9df-jz9fc\" (UID: \"fca78be5-9e02-48f5-b575-bf47734fee5b\") " pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.786805 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpgq6\" (UniqueName: \"kubernetes.io/projected/68b434d7-79a1-4449-ba3e-2615931cc0b4-kube-api-access-bpgq6\") pod \"controller-manager-68f6ccd7-fxpx6\" (UID: \"68b434d7-79a1-4449-ba3e-2615931cc0b4\") " pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.786832 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68b434d7-79a1-4449-ba3e-2615931cc0b4-serving-cert\") pod \"controller-manager-68f6ccd7-fxpx6\" (UID: \"68b434d7-79a1-4449-ba3e-2615931cc0b4\") " pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.786857 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rhr8\" (UniqueName: \"kubernetes.io/projected/fca78be5-9e02-48f5-b575-bf47734fee5b-kube-api-access-5rhr8\") pod \"route-controller-manager-fd756b9df-jz9fc\" (UID: \"fca78be5-9e02-48f5-b575-bf47734fee5b\") " pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.786874 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68b434d7-79a1-4449-ba3e-2615931cc0b4-client-ca\") pod \"controller-manager-68f6ccd7-fxpx6\" (UID: \"68b434d7-79a1-4449-ba3e-2615931cc0b4\") " pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.786891 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fca78be5-9e02-48f5-b575-bf47734fee5b-config\") pod \"route-controller-manager-fd756b9df-jz9fc\" (UID: \"fca78be5-9e02-48f5-b575-bf47734fee5b\") " pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.786906 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fca78be5-9e02-48f5-b575-bf47734fee5b-serving-cert\") pod \"route-controller-manager-fd756b9df-jz9fc\" (UID: 
\"fca78be5-9e02-48f5-b575-bf47734fee5b\") " pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.786928 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b434d7-79a1-4449-ba3e-2615931cc0b4-config\") pod \"controller-manager-68f6ccd7-fxpx6\" (UID: \"68b434d7-79a1-4449-ba3e-2615931cc0b4\") " pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.787956 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68b434d7-79a1-4449-ba3e-2615931cc0b4-client-ca\") pod \"controller-manager-68f6ccd7-fxpx6\" (UID: \"68b434d7-79a1-4449-ba3e-2615931cc0b4\") " pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.788097 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/68b434d7-79a1-4449-ba3e-2615931cc0b4-proxy-ca-bundles\") pod \"controller-manager-68f6ccd7-fxpx6\" (UID: \"68b434d7-79a1-4449-ba3e-2615931cc0b4\") " pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.788223 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fca78be5-9e02-48f5-b575-bf47734fee5b-client-ca\") pod \"route-controller-manager-fd756b9df-jz9fc\" (UID: \"fca78be5-9e02-48f5-b575-bf47734fee5b\") " pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.788347 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b434d7-79a1-4449-ba3e-2615931cc0b4-config\") pod \"controller-manager-68f6ccd7-fxpx6\" (UID: \"68b434d7-79a1-4449-ba3e-2615931cc0b4\") " pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.792563 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fca78be5-9e02-48f5-b575-bf47734fee5b-serving-cert\") pod \"route-controller-manager-fd756b9df-jz9fc\" (UID: \"fca78be5-9e02-48f5-b575-bf47734fee5b\") " pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.792595 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68b434d7-79a1-4449-ba3e-2615931cc0b4-serving-cert\") pod \"controller-manager-68f6ccd7-fxpx6\" (UID: \"68b434d7-79a1-4449-ba3e-2615931cc0b4\") " pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.795777 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fca78be5-9e02-48f5-b575-bf47734fee5b-config\") pod \"route-controller-manager-fd756b9df-jz9fc\" (UID: \"fca78be5-9e02-48f5-b575-bf47734fee5b\") " pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.802662 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5rhr8\" (UniqueName: \"kubernetes.io/projected/fca78be5-9e02-48f5-b575-bf47734fee5b-kube-api-access-5rhr8\") pod \"route-controller-manager-fd756b9df-jz9fc\" (UID: \"fca78be5-9e02-48f5-b575-bf47734fee5b\") " pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.803580 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpgq6\" (UniqueName: \"kubernetes.io/projected/68b434d7-79a1-4449-ba3e-2615931cc0b4-kube-api-access-bpgq6\") pod \"controller-manager-68f6ccd7-fxpx6\" (UID: \"68b434d7-79a1-4449-ba3e-2615931cc0b4\") " pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.931329 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cncg2" podUID="c6a77909-6aaf-4339-84fc-a3121e8d15f3" containerName="registry-server" containerID="cri-o://a46c52659e3fd2c72417239c582ffbf4d8328132771dd88cf6d23fd70deabdc6" gracePeriod=2 Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.946616 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.955989 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c9zvf" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.956046 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c9zvf" Feb 02 17:18:10 crc kubenswrapper[4858]: I0202 17:18:10.960255 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.003594 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c9zvf" Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.293291 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68f6ccd7-fxpx6"] Feb 02 17:18:11 crc kubenswrapper[4858]: W0202 17:18:11.304953 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68b434d7_79a1_4449_ba3e_2615931cc0b4.slice/crio-d2fbd6e2c06251f8b22402617a724358f2ecd519654890a8097f7b623f8032e9 WatchSource:0}: Error finding container d2fbd6e2c06251f8b22402617a724358f2ecd519654890a8097f7b623f8032e9: Status 404 returned error can't find the container with id d2fbd6e2c06251f8b22402617a724358f2ecd519654890a8097f7b623f8032e9 Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.372876 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc"] Feb 02 17:18:11 crc kubenswrapper[4858]: W0202 17:18:11.422508 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfca78be5_9e02_48f5_b575_bf47734fee5b.slice/crio-2fa916f3fa64ece9b9c44fa2e5b1e26d1c95532d7489c80fdebfc43b10b683be WatchSource:0}: Error finding container 2fa916f3fa64ece9b9c44fa2e5b1e26d1c95532d7489c80fdebfc43b10b683be: Status 404 returned error can't find the container with id 2fa916f3fa64ece9b9c44fa2e5b1e26d1c95532d7489c80fdebfc43b10b683be Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.443518 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cncg2" Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.594387 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6a77909-6aaf-4339-84fc-a3121e8d15f3-utilities\") pod \"c6a77909-6aaf-4339-84fc-a3121e8d15f3\" (UID: \"c6a77909-6aaf-4339-84fc-a3121e8d15f3\") " Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.594519 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6a77909-6aaf-4339-84fc-a3121e8d15f3-catalog-content\") pod \"c6a77909-6aaf-4339-84fc-a3121e8d15f3\" (UID: \"c6a77909-6aaf-4339-84fc-a3121e8d15f3\") " Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.594550 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p86ft\" (UniqueName: \"kubernetes.io/projected/c6a77909-6aaf-4339-84fc-a3121e8d15f3-kube-api-access-p86ft\") pod \"c6a77909-6aaf-4339-84fc-a3121e8d15f3\" (UID: \"c6a77909-6aaf-4339-84fc-a3121e8d15f3\") " Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.595852 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6a77909-6aaf-4339-84fc-a3121e8d15f3-utilities" (OuterVolumeSpecName: "utilities") pod "c6a77909-6aaf-4339-84fc-a3121e8d15f3" (UID: "c6a77909-6aaf-4339-84fc-a3121e8d15f3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.600262 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6a77909-6aaf-4339-84fc-a3121e8d15f3-kube-api-access-p86ft" (OuterVolumeSpecName: "kube-api-access-p86ft") pod "c6a77909-6aaf-4339-84fc-a3121e8d15f3" (UID: "c6a77909-6aaf-4339-84fc-a3121e8d15f3"). InnerVolumeSpecName "kube-api-access-p86ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.644804 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6a77909-6aaf-4339-84fc-a3121e8d15f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6a77909-6aaf-4339-84fc-a3121e8d15f3" (UID: "c6a77909-6aaf-4339-84fc-a3121e8d15f3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.695911 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6a77909-6aaf-4339-84fc-a3121e8d15f3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.695953 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p86ft\" (UniqueName: \"kubernetes.io/projected/c6a77909-6aaf-4339-84fc-a3121e8d15f3-kube-api-access-p86ft\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.695983 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6a77909-6aaf-4339-84fc-a3121e8d15f3-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.939709 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" event={"ID":"68b434d7-79a1-4449-ba3e-2615931cc0b4","Type":"ContainerStarted","Data":"4511165cef0047e1a49420ac7d57fcaa28413f2b2d9c766603d0153425a6d7a4"} Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.939770 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" event={"ID":"68b434d7-79a1-4449-ba3e-2615931cc0b4","Type":"ContainerStarted","Data":"d2fbd6e2c06251f8b22402617a724358f2ecd519654890a8097f7b623f8032e9"} Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.939896 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.941383 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" event={"ID":"fca78be5-9e02-48f5-b575-bf47734fee5b","Type":"ContainerStarted","Data":"91ecdf80964348a5ea57dec0ec3f4ee91df33b89efb3d68dee09d0102647aa77"} Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.941416 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" event={"ID":"fca78be5-9e02-48f5-b575-bf47734fee5b","Type":"ContainerStarted","Data":"2fa916f3fa64ece9b9c44fa2e5b1e26d1c95532d7489c80fdebfc43b10b683be"} Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.942066 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.944840 4858 generic.go:334] "Generic (PLEG): container finished" podID="c6a77909-6aaf-4339-84fc-a3121e8d15f3" containerID="a46c52659e3fd2c72417239c582ffbf4d8328132771dd88cf6d23fd70deabdc6" exitCode=0 Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.944949 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cncg2" event={"ID":"c6a77909-6aaf-4339-84fc-a3121e8d15f3","Type":"ContainerDied","Data":"a46c52659e3fd2c72417239c582ffbf4d8328132771dd88cf6d23fd70deabdc6"} Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.944967 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cncg2" Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.945028 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cncg2" event={"ID":"c6a77909-6aaf-4339-84fc-a3121e8d15f3","Type":"ContainerDied","Data":"127aa6a7481c55b4eb899b45d7a0f7f7066d7b29d8d60f2aa7ce625f32469df9"} Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.945056 4858 scope.go:117] "RemoveContainer" containerID="a46c52659e3fd2c72417239c582ffbf4d8328132771dd88cf6d23fd70deabdc6" Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.946848 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.961799 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" podStartSLOduration=3.961780065 podStartE2EDuration="3.961780065s" podCreationTimestamp="2026-02-02 17:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:18:11.957721655 +0000 UTC m=+193.110136920" watchObservedRunningTime="2026-02-02 17:18:11.961780065 +0000 UTC m=+193.114195330" Feb 02 17:18:11 crc kubenswrapper[4858]: I0202 17:18:11.974348 4858 scope.go:117] "RemoveContainer" containerID="b5086601512153ceed395a9960682ffe0d3f29c3b692298248146625bc56c25e" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.000814 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" podStartSLOduration=4.000797128 podStartE2EDuration="4.000797128s" podCreationTimestamp="2026-02-02 17:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:18:11.979071736 +0000 UTC m=+193.131487021" watchObservedRunningTime="2026-02-02 17:18:12.000797128 +0000 UTC m=+193.153212403" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.004249 4858 scope.go:117] "RemoveContainer" containerID="f502d1bbd8fa02f03cafe02b39c277f1d510bc99b5f9ac56fbd0d7f287b192d5" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.009462 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c9zvf" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.030657 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cncg2"] Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.041643 4858 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cncg2"] Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.044369 4858 scope.go:117] "RemoveContainer" containerID="a46c52659e3fd2c72417239c582ffbf4d8328132771dd88cf6d23fd70deabdc6" Feb 02 17:18:12 crc kubenswrapper[4858]: E0202 17:18:12.045414 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a46c52659e3fd2c72417239c582ffbf4d8328132771dd88cf6d23fd70deabdc6\": container with ID starting with a46c52659e3fd2c72417239c582ffbf4d8328132771dd88cf6d23fd70deabdc6 not found: ID does not exist" containerID="a46c52659e3fd2c72417239c582ffbf4d8328132771dd88cf6d23fd70deabdc6" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.045450 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a46c52659e3fd2c72417239c582ffbf4d8328132771dd88cf6d23fd70deabdc6"} err="failed to get container status \"a46c52659e3fd2c72417239c582ffbf4d8328132771dd88cf6d23fd70deabdc6\": rpc error: code = NotFound desc = could not find container \"a46c52659e3fd2c72417239c582ffbf4d8328132771dd88cf6d23fd70deabdc6\": container with ID starting with a46c52659e3fd2c72417239c582ffbf4d8328132771dd88cf6d23fd70deabdc6 not found: ID does not exist" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.045472 4858 scope.go:117] "RemoveContainer" containerID="b5086601512153ceed395a9960682ffe0d3f29c3b692298248146625bc56c25e" Feb 02 17:18:12 crc kubenswrapper[4858]: E0202 17:18:12.046053 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5086601512153ceed395a9960682ffe0d3f29c3b692298248146625bc56c25e\": container with ID starting with b5086601512153ceed395a9960682ffe0d3f29c3b692298248146625bc56c25e not found: ID does not exist" containerID="b5086601512153ceed395a9960682ffe0d3f29c3b692298248146625bc56c25e" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.046075 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5086601512153ceed395a9960682ffe0d3f29c3b692298248146625bc56c25e"} err="failed to get container status \"b5086601512153ceed395a9960682ffe0d3f29c3b692298248146625bc56c25e\": rpc error: code = NotFound desc = could not find container \"b5086601512153ceed395a9960682ffe0d3f29c3b692298248146625bc56c25e\": container with ID starting with b5086601512153ceed395a9960682ffe0d3f29c3b692298248146625bc56c25e not found: ID does not exist" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.046089 4858 scope.go:117] "RemoveContainer" containerID="f502d1bbd8fa02f03cafe02b39c277f1d510bc99b5f9ac56fbd0d7f287b192d5" Feb 02 17:18:12 crc kubenswrapper[4858]: E0202 17:18:12.047025 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f502d1bbd8fa02f03cafe02b39c277f1d510bc99b5f9ac56fbd0d7f287b192d5\": container with ID starting with f502d1bbd8fa02f03cafe02b39c277f1d510bc99b5f9ac56fbd0d7f287b192d5 not found: ID does not exist" containerID="f502d1bbd8fa02f03cafe02b39c277f1d510bc99b5f9ac56fbd0d7f287b192d5" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.047048 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f502d1bbd8fa02f03cafe02b39c277f1d510bc99b5f9ac56fbd0d7f287b192d5"} err="failed to get container status 
\"f502d1bbd8fa02f03cafe02b39c277f1d510bc99b5f9ac56fbd0d7f287b192d5\": rpc error: code = NotFound desc = could not find container \"f502d1bbd8fa02f03cafe02b39c277f1d510bc99b5f9ac56fbd0d7f287b192d5\": container with ID starting with f502d1bbd8fa02f03cafe02b39c277f1d510bc99b5f9ac56fbd0d7f287b192d5 not found: ID does not exist" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.191050 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.413126 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6a77909-6aaf-4339-84fc-a3121e8d15f3" path="/var/lib/kubelet/pods/c6a77909-6aaf-4339-84fc-a3121e8d15f3/volumes" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.549963 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 17:18:12 crc kubenswrapper[4858]: E0202 17:18:12.550183 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a77909-6aaf-4339-84fc-a3121e8d15f3" containerName="extract-utilities" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.550194 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a77909-6aaf-4339-84fc-a3121e8d15f3" containerName="extract-utilities" Feb 02 17:18:12 crc kubenswrapper[4858]: E0202 17:18:12.550203 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a77909-6aaf-4339-84fc-a3121e8d15f3" containerName="extract-content" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.550209 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a77909-6aaf-4339-84fc-a3121e8d15f3" containerName="extract-content" Feb 02 17:18:12 crc kubenswrapper[4858]: E0202 17:18:12.550226 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a77909-6aaf-4339-84fc-a3121e8d15f3" containerName="registry-server" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.550234 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a77909-6aaf-4339-84fc-a3121e8d15f3" containerName="registry-server" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.550331 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a77909-6aaf-4339-84fc-a3121e8d15f3" containerName="registry-server" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.550929 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.557365 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.558478 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.560011 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.616207 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d9ac9c6-5f6a-47e7-a7a1-e6a222718541-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4d9ac9c6-5f6a-47e7-a7a1-e6a222718541\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.616610 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d9ac9c6-5f6a-47e7-a7a1-e6a222718541-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4d9ac9c6-5f6a-47e7-a7a1-e6a222718541\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.718045 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d9ac9c6-5f6a-47e7-a7a1-e6a222718541-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4d9ac9c6-5f6a-47e7-a7a1-e6a222718541\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.718146 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d9ac9c6-5f6a-47e7-a7a1-e6a222718541-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4d9ac9c6-5f6a-47e7-a7a1-e6a222718541\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.718181 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d9ac9c6-5f6a-47e7-a7a1-e6a222718541-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4d9ac9c6-5f6a-47e7-a7a1-e6a222718541\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.741283 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d9ac9c6-5f6a-47e7-a7a1-e6a222718541-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4d9ac9c6-5f6a-47e7-a7a1-e6a222718541\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.875853 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.955248 4858 generic.go:334] "Generic (PLEG): container finished" podID="58698d7f-881d-44c8-8457-9595f4953b9f" containerID="7ec517605c2db06701d868e634067c43a81171750a9ec9594afc2aa15bf85e09" exitCode=0 Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.955341 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9kqv" event={"ID":"58698d7f-881d-44c8-8457-9595f4953b9f","Type":"ContainerDied","Data":"7ec517605c2db06701d868e634067c43a81171750a9ec9594afc2aa15bf85e09"} Feb 02 17:18:12 crc kubenswrapper[4858]: I0202 17:18:12.964752 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nj69" event={"ID":"9015cfbb-4091-4598-b5fd-007d2372a89e","Type":"ContainerStarted","Data":"e3f0226f133a7ad5c5f53618f816b8174a98cec6f63973ae077d02290f197c06"} Feb 02 17:18:13 crc kubenswrapper[4858]: I0202 17:18:13.287376 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 02 17:18:13 crc kubenswrapper[4858]: I0202 17:18:13.970406 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4d9ac9c6-5f6a-47e7-a7a1-e6a222718541","Type":"ContainerStarted","Data":"b2c5a0da2cffde412158bf8058526298377e7b6435869847a3f639f25ffe969a"} Feb 02 17:18:13 crc kubenswrapper[4858]: I0202 17:18:13.974049 4858 generic.go:334] "Generic (PLEG): container finished" podID="9015cfbb-4091-4598-b5fd-007d2372a89e" containerID="e3f0226f133a7ad5c5f53618f816b8174a98cec6f63973ae077d02290f197c06" exitCode=0 Feb 02 17:18:13 crc kubenswrapper[4858]: I0202 17:18:13.974146 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nj69" event={"ID":"9015cfbb-4091-4598-b5fd-007d2372a89e","Type":"ContainerDied","Data":"e3f0226f133a7ad5c5f53618f816b8174a98cec6f63973ae077d02290f197c06"} Feb 02 17:18:15 crc kubenswrapper[4858]: I0202 17:18:15.985065 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4d9ac9c6-5f6a-47e7-a7a1-e6a222718541","Type":"ContainerStarted","Data":"c3f6c4459de5eea43bd8b28dd3b1eec84e0b32fa23767eac98e75d9ee40cb8da"} Feb 02 17:18:15 crc kubenswrapper[4858]: I0202 17:18:15.991782 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9kqv" event={"ID":"58698d7f-881d-44c8-8457-9595f4953b9f","Type":"ContainerStarted","Data":"5b31dc8e24351aae8bd40e1cd316d57bdf939926b63690eca874b5858929df18"} Feb 02 17:18:15 crc kubenswrapper[4858]: I0202 17:18:15.999497 4858 generic.go:334] "Generic (PLEG): container finished" podID="2bd40f69-131b-4d0c-87d9-bfae63f9a4eb" containerID="2f8cdc2381fc948cafc4792a7d58234c999e9b3df46c6aba00258b581eae955a" exitCode=0 Feb 02 17:18:15 crc kubenswrapper[4858]: I0202 17:18:15.999546 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqhch" event={"ID":"2bd40f69-131b-4d0c-87d9-bfae63f9a4eb","Type":"ContainerDied","Data":"2f8cdc2381fc948cafc4792a7d58234c999e9b3df46c6aba00258b581eae955a"} Feb 02 17:18:16 crc kubenswrapper[4858]: I0202 17:18:16.002665 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=4.002648602 podStartE2EDuration="4.002648602s" 
podCreationTimestamp="2026-02-02 17:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:18:16.00193336 +0000 UTC m=+197.154348635" watchObservedRunningTime="2026-02-02 17:18:16.002648602 +0000 UTC m=+197.155063867" Feb 02 17:18:16 crc kubenswrapper[4858]: I0202 17:18:16.052100 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w9kqv" podStartSLOduration=2.768914788 podStartE2EDuration="47.051916184s" podCreationTimestamp="2026-02-02 17:17:29 +0000 UTC" firstStartedPulling="2026-02-02 17:17:31.300861063 +0000 UTC m=+152.453276328" lastFinishedPulling="2026-02-02 17:18:15.583862459 +0000 UTC m=+196.736277724" observedRunningTime="2026-02-02 17:18:16.050266831 +0000 UTC m=+197.202682106" watchObservedRunningTime="2026-02-02 17:18:16.051916184 +0000 UTC m=+197.204331449" Feb 02 17:18:17 crc kubenswrapper[4858]: I0202 17:18:17.007160 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nj69" event={"ID":"9015cfbb-4091-4598-b5fd-007d2372a89e","Type":"ContainerStarted","Data":"62cdfb76ed077010d6dcf3ee7b6ee0223d661ccdf07bf1fcea32a14841fb23c2"} Feb 02 17:18:17 crc kubenswrapper[4858]: I0202 17:18:17.009150 4858 generic.go:334] "Generic (PLEG): container finished" podID="4d9ac9c6-5f6a-47e7-a7a1-e6a222718541" containerID="c3f6c4459de5eea43bd8b28dd3b1eec84e0b32fa23767eac98e75d9ee40cb8da" exitCode=0 Feb 02 17:18:17 crc kubenswrapper[4858]: I0202 17:18:17.009306 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4d9ac9c6-5f6a-47e7-a7a1-e6a222718541","Type":"ContainerDied","Data":"c3f6c4459de5eea43bd8b28dd3b1eec84e0b32fa23767eac98e75d9ee40cb8da"} Feb 02 17:18:17 crc kubenswrapper[4858]: I0202 17:18:17.026192 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5nj69" podStartSLOduration=4.6525837150000005 podStartE2EDuration="47.026172998s" podCreationTimestamp="2026-02-02 17:17:30 +0000 UTC" firstStartedPulling="2026-02-02 17:17:33.561413238 +0000 UTC m=+154.713828503" lastFinishedPulling="2026-02-02 17:18:15.935002521 +0000 UTC m=+197.087417786" observedRunningTime="2026-02-02 17:18:17.025496556 +0000 UTC m=+198.177911831" watchObservedRunningTime="2026-02-02 17:18:17.026172998 +0000 UTC m=+198.178588273" Feb 02 17:18:18 crc kubenswrapper[4858]: I0202 17:18:18.381280 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 17:18:18 crc kubenswrapper[4858]: I0202 17:18:18.515996 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d9ac9c6-5f6a-47e7-a7a1-e6a222718541-kube-api-access\") pod \"4d9ac9c6-5f6a-47e7-a7a1-e6a222718541\" (UID: \"4d9ac9c6-5f6a-47e7-a7a1-e6a222718541\") " Feb 02 17:18:18 crc kubenswrapper[4858]: I0202 17:18:18.516076 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d9ac9c6-5f6a-47e7-a7a1-e6a222718541-kubelet-dir\") pod \"4d9ac9c6-5f6a-47e7-a7a1-e6a222718541\" (UID: \"4d9ac9c6-5f6a-47e7-a7a1-e6a222718541\") " Feb 02 17:18:18 crc kubenswrapper[4858]: I0202 17:18:18.516438 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d9ac9c6-5f6a-47e7-a7a1-e6a222718541-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4d9ac9c6-5f6a-47e7-a7a1-e6a222718541" (UID: "4d9ac9c6-5f6a-47e7-a7a1-e6a222718541"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:18:18 crc kubenswrapper[4858]: I0202 17:18:18.523108 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d9ac9c6-5f6a-47e7-a7a1-e6a222718541-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4d9ac9c6-5f6a-47e7-a7a1-e6a222718541" (UID: "4d9ac9c6-5f6a-47e7-a7a1-e6a222718541"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:18:18 crc kubenswrapper[4858]: I0202 17:18:18.617927 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d9ac9c6-5f6a-47e7-a7a1-e6a222718541-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:18 crc kubenswrapper[4858]: I0202 17:18:18.618010 4858 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d9ac9c6-5f6a-47e7-a7a1-e6a222718541-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:19 crc kubenswrapper[4858]: I0202 17:18:19.023792 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4d9ac9c6-5f6a-47e7-a7a1-e6a222718541","Type":"ContainerDied","Data":"b2c5a0da2cffde412158bf8058526298377e7b6435869847a3f639f25ffe969a"} Feb 02 17:18:19 crc kubenswrapper[4858]: I0202 17:18:19.024066 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2c5a0da2cffde412158bf8058526298377e7b6435869847a3f639f25ffe969a" Feb 02 17:18:19 crc kubenswrapper[4858]: I0202 17:18:19.023851 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 17:18:20 crc kubenswrapper[4858]: I0202 17:18:20.344641 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 17:18:20 crc kubenswrapper[4858]: E0202 17:18:20.344925 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9ac9c6-5f6a-47e7-a7a1-e6a222718541" containerName="pruner" Feb 02 17:18:20 crc kubenswrapper[4858]: I0202 17:18:20.344940 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9ac9c6-5f6a-47e7-a7a1-e6a222718541" containerName="pruner" Feb 02 17:18:20 crc kubenswrapper[4858]: I0202 17:18:20.345316 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d9ac9c6-5f6a-47e7-a7a1-e6a222718541" containerName="pruner" Feb 02 17:18:20 crc kubenswrapper[4858]: I0202 17:18:20.345732 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 17:18:20 crc kubenswrapper[4858]: I0202 17:18:20.364438 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 17:18:20 crc kubenswrapper[4858]: I0202 17:18:20.365865 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 17:18:20 crc kubenswrapper[4858]: I0202 17:18:20.370241 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w9kqv" Feb 02 17:18:20 crc kubenswrapper[4858]: I0202 17:18:20.370450 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w9kqv" Feb 02 17:18:20 crc kubenswrapper[4858]: I0202 17:18:20.377901 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 17:18:20 crc kubenswrapper[4858]: I0202 17:18:20.414749 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w9kqv" Feb 02 17:18:20 crc kubenswrapper[4858]: I0202 17:18:20.542854 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85579d4b-0219-4f36-8251-755e28bbe3ba-var-lock\") pod \"installer-9-crc\" (UID: \"85579d4b-0219-4f36-8251-755e28bbe3ba\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 17:18:20 crc kubenswrapper[4858]: I0202 17:18:20.542930 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85579d4b-0219-4f36-8251-755e28bbe3ba-kube-api-access\") pod \"installer-9-crc\" (UID: \"85579d4b-0219-4f36-8251-755e28bbe3ba\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 17:18:20 crc kubenswrapper[4858]: I0202 17:18:20.542964 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85579d4b-0219-4f36-8251-755e28bbe3ba-kubelet-dir\") pod \"installer-9-crc\" (UID: \"85579d4b-0219-4f36-8251-755e28bbe3ba\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 17:18:20 crc kubenswrapper[4858]: I0202 17:18:20.645149 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85579d4b-0219-4f36-8251-755e28bbe3ba-var-lock\") pod \"installer-9-crc\" (UID: 
\"85579d4b-0219-4f36-8251-755e28bbe3ba\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 17:18:20 crc kubenswrapper[4858]: I0202 17:18:20.645213 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85579d4b-0219-4f36-8251-755e28bbe3ba-kube-api-access\") pod \"installer-9-crc\" (UID: \"85579d4b-0219-4f36-8251-755e28bbe3ba\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 17:18:20 crc kubenswrapper[4858]: I0202 17:18:20.645243 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85579d4b-0219-4f36-8251-755e28bbe3ba-kubelet-dir\") pod \"installer-9-crc\" (UID: \"85579d4b-0219-4f36-8251-755e28bbe3ba\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 17:18:20 crc kubenswrapper[4858]: I0202 17:18:20.645321 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85579d4b-0219-4f36-8251-755e28bbe3ba-kubelet-dir\") pod \"installer-9-crc\" (UID: \"85579d4b-0219-4f36-8251-755e28bbe3ba\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 17:18:20 crc kubenswrapper[4858]: I0202 17:18:20.645344 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85579d4b-0219-4f36-8251-755e28bbe3ba-var-lock\") pod \"installer-9-crc\" (UID: \"85579d4b-0219-4f36-8251-755e28bbe3ba\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 17:18:20 crc kubenswrapper[4858]: I0202 17:18:20.666161 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85579d4b-0219-4f36-8251-755e28bbe3ba-kube-api-access\") pod \"installer-9-crc\" (UID: \"85579d4b-0219-4f36-8251-755e28bbe3ba\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 17:18:20 crc kubenswrapper[4858]: I0202 17:18:20.673503 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 17:18:21 crc kubenswrapper[4858]: I0202 17:18:21.086472 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w9kqv" Feb 02 17:18:21 crc kubenswrapper[4858]: I0202 17:18:21.338593 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5nj69" Feb 02 17:18:21 crc kubenswrapper[4858]: I0202 17:18:21.338674 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5nj69" Feb 02 17:18:22 crc kubenswrapper[4858]: I0202 17:18:22.376056 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5nj69" podUID="9015cfbb-4091-4598-b5fd-007d2372a89e" containerName="registry-server" probeResult="failure" output=< Feb 02 17:18:22 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Feb 02 17:18:22 crc kubenswrapper[4858]: > Feb 02 17:18:24 crc kubenswrapper[4858]: I0202 17:18:24.204946 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9kqv"] Feb 02 17:18:24 crc kubenswrapper[4858]: I0202 17:18:24.205381 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w9kqv" podUID="58698d7f-881d-44c8-8457-9595f4953b9f" containerName="registry-server" containerID="cri-o://5b31dc8e24351aae8bd40e1cd316d57bdf939926b63690eca874b5858929df18" gracePeriod=2 Feb 02 17:18:25 crc kubenswrapper[4858]: I0202 17:18:25.057776 4858 generic.go:334] "Generic (PLEG): container finished" podID="58698d7f-881d-44c8-8457-9595f4953b9f" containerID="5b31dc8e24351aae8bd40e1cd316d57bdf939926b63690eca874b5858929df18" exitCode=0 Feb 02 17:18:25 crc kubenswrapper[4858]: I0202 17:18:25.058000 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9kqv" event={"ID":"58698d7f-881d-44c8-8457-9595f4953b9f","Type":"ContainerDied","Data":"5b31dc8e24351aae8bd40e1cd316d57bdf939926b63690eca874b5858929df18"} Feb 02 17:18:25 crc kubenswrapper[4858]: I0202 17:18:25.284419 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.030782 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9kqv" Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.067485 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7x482" event={"ID":"69eb2d24-ee9f-4ef2-8bf0-233099196e0d","Type":"ContainerStarted","Data":"138846b9e2a32cf9e6ef2c0f35a9d4b7da1ebc8c33fcc4be0d7966c80a6f88fc"} Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.070435 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqhch" event={"ID":"2bd40f69-131b-4d0c-87d9-bfae63f9a4eb","Type":"ContainerStarted","Data":"8bfdf407314bd7b21d52c94dd7d2f4f4581db7ddfd43b727d43289e9bf141d58"} Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.073377 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"85579d4b-0219-4f36-8251-755e28bbe3ba","Type":"ContainerStarted","Data":"c8558e1baececfa493b81898a267feecc1d4b0e1a661f4f46aa25821b220155a"} Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.073409 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"85579d4b-0219-4f36-8251-755e28bbe3ba","Type":"ContainerStarted","Data":"19e7f7005a7325387f5d905a5fff00431048c06d2f3520536b801ad6ae49424c"} Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.075138 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w9kqv" event={"ID":"58698d7f-881d-44c8-8457-9595f4953b9f","Type":"ContainerDied","Data":"14b83c8542f684115929ade11c8b97b9199c96b1faee859e7a2e2b5e7030382f"} Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.075174 4858 scope.go:117] "RemoveContainer" containerID="5b31dc8e24351aae8bd40e1cd316d57bdf939926b63690eca874b5858929df18" Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.075269 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w9kqv" Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.093183 4858 scope.go:117] "RemoveContainer" containerID="7ec517605c2db06701d868e634067c43a81171750a9ec9594afc2aa15bf85e09" Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.107534 4858 scope.go:117] "RemoveContainer" containerID="f923701e0d18c12a700216f320130359c30b5efe9c183d33d4c24580d7b0e0a8" Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.114539 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=6.114498565 podStartE2EDuration="6.114498565s" podCreationTimestamp="2026-02-02 17:18:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:18:26.109207045 +0000 UTC m=+207.261622320" watchObservedRunningTime="2026-02-02 17:18:26.114498565 +0000 UTC m=+207.266913830" Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.129639 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vqhch" podStartSLOduration=4.411173209 podStartE2EDuration="59.129618461s" podCreationTimestamp="2026-02-02 17:17:27 +0000 UTC" firstStartedPulling="2026-02-02 17:17:30.210809615 +0000 UTC m=+151.363224880" lastFinishedPulling="2026-02-02 17:18:24.929254877 +0000 UTC m=+206.081670132" observedRunningTime="2026-02-02 17:18:26.129532488 +0000 UTC m=+207.281947763" watchObservedRunningTime="2026-02-02 17:18:26.129618461 +0000 UTC m=+207.282033726" Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.231625 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58698d7f-881d-44c8-8457-9595f4953b9f-catalog-content\") pod \"58698d7f-881d-44c8-8457-9595f4953b9f\" (UID: \"58698d7f-881d-44c8-8457-9595f4953b9f\") " Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.231670 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58698d7f-881d-44c8-8457-9595f4953b9f-utilities\") pod \"58698d7f-881d-44c8-8457-9595f4953b9f\" (UID: \"58698d7f-881d-44c8-8457-9595f4953b9f\") " Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.231751 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgg8w\" (UniqueName: \"kubernetes.io/projected/58698d7f-881d-44c8-8457-9595f4953b9f-kube-api-access-dgg8w\") pod \"58698d7f-881d-44c8-8457-9595f4953b9f\" (UID: \"58698d7f-881d-44c8-8457-9595f4953b9f\") " Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.233212 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58698d7f-881d-44c8-8457-9595f4953b9f-utilities" (OuterVolumeSpecName: "utilities") pod "58698d7f-881d-44c8-8457-9595f4953b9f" (UID: "58698d7f-881d-44c8-8457-9595f4953b9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.238823 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58698d7f-881d-44c8-8457-9595f4953b9f-kube-api-access-dgg8w" (OuterVolumeSpecName: "kube-api-access-dgg8w") pod "58698d7f-881d-44c8-8457-9595f4953b9f" (UID: "58698d7f-881d-44c8-8457-9595f4953b9f"). InnerVolumeSpecName "kube-api-access-dgg8w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.253117 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58698d7f-881d-44c8-8457-9595f4953b9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58698d7f-881d-44c8-8457-9595f4953b9f" (UID: "58698d7f-881d-44c8-8457-9595f4953b9f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.333158 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgg8w\" (UniqueName: \"kubernetes.io/projected/58698d7f-881d-44c8-8457-9595f4953b9f-kube-api-access-dgg8w\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.333203 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58698d7f-881d-44c8-8457-9595f4953b9f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.333217 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58698d7f-881d-44c8-8457-9595f4953b9f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.448000 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9kqv"] Feb 02 17:18:26 crc kubenswrapper[4858]: I0202 17:18:26.450800 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w9kqv"] Feb 02 17:18:27 crc kubenswrapper[4858]: I0202 17:18:27.082046 4858 generic.go:334] "Generic (PLEG): container finished" podID="69eb2d24-ee9f-4ef2-8bf0-233099196e0d" containerID="138846b9e2a32cf9e6ef2c0f35a9d4b7da1ebc8c33fcc4be0d7966c80a6f88fc" exitCode=0 Feb 02 17:18:27 crc kubenswrapper[4858]: I0202 17:18:27.082091 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7x482" event={"ID":"69eb2d24-ee9f-4ef2-8bf0-233099196e0d","Type":"ContainerDied","Data":"138846b9e2a32cf9e6ef2c0f35a9d4b7da1ebc8c33fcc4be0d7966c80a6f88fc"} Feb 02 17:18:27 crc kubenswrapper[4858]: I0202 17:18:27.808409 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:18:27 crc kubenswrapper[4858]: I0202 17:18:27.810170 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:18:27 crc kubenswrapper[4858]: I0202 17:18:27.810273 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" Feb 02 17:18:27 crc kubenswrapper[4858]: I0202 17:18:27.811313 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57"} pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 17:18:27 crc kubenswrapper[4858]: I0202 17:18:27.811444 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" containerID="cri-o://a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57" gracePeriod=600 Feb 02 17:18:28 crc kubenswrapper[4858]: I0202 17:18:28.092748 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7x482" event={"ID":"69eb2d24-ee9f-4ef2-8bf0-233099196e0d","Type":"ContainerStarted","Data":"3ee37aa884ece2c00bc0f392fbdbd0178303c1b150463395ed72eb5277feaed5"} Feb 02 17:18:28 crc kubenswrapper[4858]: I0202 17:18:28.096095 4858 generic.go:334] "Generic (PLEG): container finished" podID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerID="a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57" exitCode=0 Feb 02 17:18:28 crc kubenswrapper[4858]: I0202 17:18:28.096128 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerDied","Data":"a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57"} Feb 02 17:18:28 crc kubenswrapper[4858]: I0202 17:18:28.344550 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vqhch" Feb 02 17:18:28 crc kubenswrapper[4858]: I0202 17:18:28.344761 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vqhch" Feb 02 17:18:28 crc kubenswrapper[4858]: I0202 17:18:28.387332 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vqhch" Feb 02 17:18:28 crc kubenswrapper[4858]: I0202 17:18:28.407494 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58698d7f-881d-44c8-8457-9595f4953b9f" path="/var/lib/kubelet/pods/58698d7f-881d-44c8-8457-9595f4953b9f/volumes" Feb 02 17:18:28 crc kubenswrapper[4858]: I0202 17:18:28.818294 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68f6ccd7-fxpx6"] Feb 02 17:18:28 crc kubenswrapper[4858]: I0202 17:18:28.818785 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" podUID="68b434d7-79a1-4449-ba3e-2615931cc0b4" containerName="controller-manager" containerID="cri-o://4511165cef0047e1a49420ac7d57fcaa28413f2b2d9c766603d0153425a6d7a4" gracePeriod=30 Feb 02 17:18:28 crc kubenswrapper[4858]: I0202 17:18:28.839051 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc"] Feb 02 17:18:28 crc kubenswrapper[4858]: I0202 17:18:28.839265 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" podUID="fca78be5-9e02-48f5-b575-bf47734fee5b" containerName="route-controller-manager" containerID="cri-o://91ecdf80964348a5ea57dec0ec3f4ee91df33b89efb3d68dee09d0102647aa77" gracePeriod=30 Feb 02 17:18:29 crc kubenswrapper[4858]: I0202 17:18:29.105249 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerStarted","Data":"53c039250f690ce1254a34f24b2227f388a22d8e62f92b86cf497d453228deae"} Feb 02 17:18:29 crc kubenswrapper[4858]: I0202 17:18:29.138859 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7x482" podStartSLOduration=3.445977758 podStartE2EDuration="1m2.138840842s" podCreationTimestamp="2026-02-02 17:17:27 +0000 UTC" firstStartedPulling="2026-02-02 17:17:29.16139275 +0000 UTC m=+150.313808025" lastFinishedPulling="2026-02-02 17:18:27.854255834 +0000 UTC m=+209.006671109" observedRunningTime="2026-02-02 17:18:29.136466375 +0000 UTC m=+210.288881670" watchObservedRunningTime="2026-02-02 17:18:29.138840842 +0000 UTC m=+210.291256117" Feb 02 17:18:29 crc kubenswrapper[4858]: I0202 17:18:29.899681 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.004749 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.079232 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fca78be5-9e02-48f5-b575-bf47734fee5b-client-ca\") pod \"fca78be5-9e02-48f5-b575-bf47734fee5b\" (UID: \"fca78be5-9e02-48f5-b575-bf47734fee5b\") " Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.079284 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fca78be5-9e02-48f5-b575-bf47734fee5b-serving-cert\") pod \"fca78be5-9e02-48f5-b575-bf47734fee5b\" (UID: \"fca78be5-9e02-48f5-b575-bf47734fee5b\") " Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.079350 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rhr8\" (UniqueName: \"kubernetes.io/projected/fca78be5-9e02-48f5-b575-bf47734fee5b-kube-api-access-5rhr8\") pod \"fca78be5-9e02-48f5-b575-bf47734fee5b\" (UID: \"fca78be5-9e02-48f5-b575-bf47734fee5b\") " Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.079385 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fca78be5-9e02-48f5-b575-bf47734fee5b-config\") pod \"fca78be5-9e02-48f5-b575-bf47734fee5b\" (UID: \"fca78be5-9e02-48f5-b575-bf47734fee5b\") " Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.080208 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fca78be5-9e02-48f5-b575-bf47734fee5b-client-ca" (OuterVolumeSpecName: "client-ca") pod "fca78be5-9e02-48f5-b575-bf47734fee5b" (UID: "fca78be5-9e02-48f5-b575-bf47734fee5b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.080529 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fca78be5-9e02-48f5-b575-bf47734fee5b-config" (OuterVolumeSpecName: "config") pod "fca78be5-9e02-48f5-b575-bf47734fee5b" (UID: "fca78be5-9e02-48f5-b575-bf47734fee5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.084749 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fca78be5-9e02-48f5-b575-bf47734fee5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fca78be5-9e02-48f5-b575-bf47734fee5b" (UID: "fca78be5-9e02-48f5-b575-bf47734fee5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.085096 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fca78be5-9e02-48f5-b575-bf47734fee5b-kube-api-access-5rhr8" (OuterVolumeSpecName: "kube-api-access-5rhr8") pod "fca78be5-9e02-48f5-b575-bf47734fee5b" (UID: "fca78be5-9e02-48f5-b575-bf47734fee5b"). InnerVolumeSpecName "kube-api-access-5rhr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.112114 4858 generic.go:334] "Generic (PLEG): container finished" podID="68b434d7-79a1-4449-ba3e-2615931cc0b4" containerID="4511165cef0047e1a49420ac7d57fcaa28413f2b2d9c766603d0153425a6d7a4" exitCode=0 Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.112196 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" event={"ID":"68b434d7-79a1-4449-ba3e-2615931cc0b4","Type":"ContainerDied","Data":"4511165cef0047e1a49420ac7d57fcaa28413f2b2d9c766603d0153425a6d7a4"} Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.112229 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" event={"ID":"68b434d7-79a1-4449-ba3e-2615931cc0b4","Type":"ContainerDied","Data":"d2fbd6e2c06251f8b22402617a724358f2ecd519654890a8097f7b623f8032e9"} Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.112240 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68f6ccd7-fxpx6" Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.112249 4858 scope.go:117] "RemoveContainer" containerID="4511165cef0047e1a49420ac7d57fcaa28413f2b2d9c766603d0153425a6d7a4" Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.114494 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.115728 4858 generic.go:334] "Generic (PLEG): container finished" podID="fca78be5-9e02-48f5-b575-bf47734fee5b" containerID="91ecdf80964348a5ea57dec0ec3f4ee91df33b89efb3d68dee09d0102647aa77" exitCode=0 Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.115797 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" event={"ID":"fca78be5-9e02-48f5-b575-bf47734fee5b","Type":"ContainerDied","Data":"91ecdf80964348a5ea57dec0ec3f4ee91df33b89efb3d68dee09d0102647aa77"} Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.115894 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc" event={"ID":"fca78be5-9e02-48f5-b575-bf47734fee5b","Type":"ContainerDied","Data":"2fa916f3fa64ece9b9c44fa2e5b1e26d1c95532d7489c80fdebfc43b10b683be"} Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.128758 4858 scope.go:117] "RemoveContainer" containerID="4511165cef0047e1a49420ac7d57fcaa28413f2b2d9c766603d0153425a6d7a4" Feb 02 17:18:30 crc kubenswrapper[4858]: E0202 17:18:30.129668 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4511165cef0047e1a49420ac7d57fcaa28413f2b2d9c766603d0153425a6d7a4\": container with ID starting with 4511165cef0047e1a49420ac7d57fcaa28413f2b2d9c766603d0153425a6d7a4 not found: ID does not exist" containerID="4511165cef0047e1a49420ac7d57fcaa28413f2b2d9c766603d0153425a6d7a4" Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.129707 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4511165cef0047e1a49420ac7d57fcaa28413f2b2d9c766603d0153425a6d7a4"} err="failed to get container status \"4511165cef0047e1a49420ac7d57fcaa28413f2b2d9c766603d0153425a6d7a4\": rpc error: code = NotFound desc = could not find container \"4511165cef0047e1a49420ac7d57fcaa28413f2b2d9c766603d0153425a6d7a4\": container with ID starting with 4511165cef0047e1a49420ac7d57fcaa28413f2b2d9c766603d0153425a6d7a4 not found: ID does not exist" Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.129740 4858 scope.go:117] "RemoveContainer" containerID="91ecdf80964348a5ea57dec0ec3f4ee91df33b89efb3d68dee09d0102647aa77" Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.142138 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc"] Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.147143 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd756b9df-jz9fc"] Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.150899 4858 scope.go:117] "RemoveContainer" containerID="91ecdf80964348a5ea57dec0ec3f4ee91df33b89efb3d68dee09d0102647aa77" Feb 02 17:18:30 crc kubenswrapper[4858]: E0202 17:18:30.151345 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91ecdf80964348a5ea57dec0ec3f4ee91df33b89efb3d68dee09d0102647aa77\": container with ID starting with 91ecdf80964348a5ea57dec0ec3f4ee91df33b89efb3d68dee09d0102647aa77 not found: ID does not exist" containerID="91ecdf80964348a5ea57dec0ec3f4ee91df33b89efb3d68dee09d0102647aa77" Feb 02 17:18:30 crc 
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.156137 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vqhch"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.180609 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/68b434d7-79a1-4449-ba3e-2615931cc0b4-proxy-ca-bundles\") pod \"68b434d7-79a1-4449-ba3e-2615931cc0b4\" (UID: \"68b434d7-79a1-4449-ba3e-2615931cc0b4\") "
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.180875 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68b434d7-79a1-4449-ba3e-2615931cc0b4-serving-cert\") pod \"68b434d7-79a1-4449-ba3e-2615931cc0b4\" (UID: \"68b434d7-79a1-4449-ba3e-2615931cc0b4\") "
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.180937 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68b434d7-79a1-4449-ba3e-2615931cc0b4-client-ca\") pod \"68b434d7-79a1-4449-ba3e-2615931cc0b4\" (UID: \"68b434d7-79a1-4449-ba3e-2615931cc0b4\") "
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.180989 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b434d7-79a1-4449-ba3e-2615931cc0b4-config\") pod \"68b434d7-79a1-4449-ba3e-2615931cc0b4\" (UID: \"68b434d7-79a1-4449-ba3e-2615931cc0b4\") "
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.181063 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpgq6\" (UniqueName: \"kubernetes.io/projected/68b434d7-79a1-4449-ba3e-2615931cc0b4-kube-api-access-bpgq6\") pod \"68b434d7-79a1-4449-ba3e-2615931cc0b4\" (UID: \"68b434d7-79a1-4449-ba3e-2615931cc0b4\") "
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.181402 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rhr8\" (UniqueName: \"kubernetes.io/projected/fca78be5-9e02-48f5-b575-bf47734fee5b-kube-api-access-5rhr8\") on node \"crc\" DevicePath \"\""
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.181425 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fca78be5-9e02-48f5-b575-bf47734fee5b-config\") on node \"crc\" DevicePath \"\""
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.181437 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fca78be5-9e02-48f5-b575-bf47734fee5b-client-ca\") on node \"crc\" DevicePath \"\""
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.181449 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fca78be5-9e02-48f5-b575-bf47734fee5b-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.181510 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68b434d7-79a1-4449-ba3e-2615931cc0b4-client-ca" (OuterVolumeSpecName: "client-ca") pod "68b434d7-79a1-4449-ba3e-2615931cc0b4" (UID: "68b434d7-79a1-4449-ba3e-2615931cc0b4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.181699 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68b434d7-79a1-4449-ba3e-2615931cc0b4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "68b434d7-79a1-4449-ba3e-2615931cc0b4" (UID: "68b434d7-79a1-4449-ba3e-2615931cc0b4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.181756 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68b434d7-79a1-4449-ba3e-2615931cc0b4-config" (OuterVolumeSpecName: "config") pod "68b434d7-79a1-4449-ba3e-2615931cc0b4" (UID: "68b434d7-79a1-4449-ba3e-2615931cc0b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.184682 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68b434d7-79a1-4449-ba3e-2615931cc0b4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "68b434d7-79a1-4449-ba3e-2615931cc0b4" (UID: "68b434d7-79a1-4449-ba3e-2615931cc0b4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.185073 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68b434d7-79a1-4449-ba3e-2615931cc0b4-kube-api-access-bpgq6" (OuterVolumeSpecName: "kube-api-access-bpgq6") pod "68b434d7-79a1-4449-ba3e-2615931cc0b4" (UID: "68b434d7-79a1-4449-ba3e-2615931cc0b4"). InnerVolumeSpecName "kube-api-access-bpgq6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.282038 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/68b434d7-79a1-4449-ba3e-2615931cc0b4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.282324 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/68b434d7-79a1-4449-ba3e-2615931cc0b4-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.282394 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/68b434d7-79a1-4449-ba3e-2615931cc0b4-client-ca\") on node \"crc\" DevicePath \"\""
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.282464 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68b434d7-79a1-4449-ba3e-2615931cc0b4-config\") on node \"crc\" DevicePath \"\""
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.282533 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpgq6\" (UniqueName: \"kubernetes.io/projected/68b434d7-79a1-4449-ba3e-2615931cc0b4-kube-api-access-bpgq6\") on node \"crc\" DevicePath \"\""
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.414170 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fca78be5-9e02-48f5-b575-bf47734fee5b" path="/var/lib/kubelet/pods/fca78be5-9e02-48f5-b575-bf47734fee5b/volumes"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.462274 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68f6ccd7-fxpx6"]
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.465441 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-68f6ccd7-fxpx6"]
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.645586 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc"]
Feb 02 17:18:30 crc kubenswrapper[4858]: E0202 17:18:30.645813 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58698d7f-881d-44c8-8457-9595f4953b9f" containerName="registry-server"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.645835 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="58698d7f-881d-44c8-8457-9595f4953b9f" containerName="registry-server"
Feb 02 17:18:30 crc kubenswrapper[4858]: E0202 17:18:30.645849 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58698d7f-881d-44c8-8457-9595f4953b9f" containerName="extract-content"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.645858 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="58698d7f-881d-44c8-8457-9595f4953b9f" containerName="extract-content"
Feb 02 17:18:30 crc kubenswrapper[4858]: E0202 17:18:30.645872 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b434d7-79a1-4449-ba3e-2615931cc0b4" containerName="controller-manager"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.645880 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b434d7-79a1-4449-ba3e-2615931cc0b4" containerName="controller-manager"
Feb 02 17:18:30 crc kubenswrapper[4858]: E0202 17:18:30.645888 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca78be5-9e02-48f5-b575-bf47734fee5b" containerName="route-controller-manager"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.645909 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca78be5-9e02-48f5-b575-bf47734fee5b" containerName="route-controller-manager"
Feb 02 17:18:30 crc kubenswrapper[4858]: E0202 17:18:30.645926 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58698d7f-881d-44c8-8457-9595f4953b9f" containerName="extract-utilities"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.645934 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="58698d7f-881d-44c8-8457-9595f4953b9f" containerName="extract-utilities"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.646078 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="58698d7f-881d-44c8-8457-9595f4953b9f" containerName="registry-server"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.646091 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca78be5-9e02-48f5-b575-bf47734fee5b" containerName="route-controller-manager"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.646105 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="68b434d7-79a1-4449-ba3e-2615931cc0b4" containerName="controller-manager"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.646545 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.648534 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.648860 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.649208 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.649472 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.651434 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.655171 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.662873 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc"]
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.788452 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5addafbe-0ed8-44c4-a510-eb260b9c8149-serving-cert\") pod \"route-controller-manager-5b8989f9dc-vnswc\" (UID: \"5addafbe-0ed8-44c4-a510-eb260b9c8149\") " pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.788674 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gq4g\" (UniqueName: \"kubernetes.io/projected/5addafbe-0ed8-44c4-a510-eb260b9c8149-kube-api-access-5gq4g\") pod \"route-controller-manager-5b8989f9dc-vnswc\" (UID: \"5addafbe-0ed8-44c4-a510-eb260b9c8149\") " pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.788769 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5addafbe-0ed8-44c4-a510-eb260b9c8149-config\") pod \"route-controller-manager-5b8989f9dc-vnswc\" (UID: \"5addafbe-0ed8-44c4-a510-eb260b9c8149\") " pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.788952 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5addafbe-0ed8-44c4-a510-eb260b9c8149-client-ca\") pod \"route-controller-manager-5b8989f9dc-vnswc\" (UID: \"5addafbe-0ed8-44c4-a510-eb260b9c8149\") " pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.889918 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5addafbe-0ed8-44c4-a510-eb260b9c8149-serving-cert\") pod \"route-controller-manager-5b8989f9dc-vnswc\" (UID: \"5addafbe-0ed8-44c4-a510-eb260b9c8149\") " pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.890382 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gq4g\" (UniqueName: \"kubernetes.io/projected/5addafbe-0ed8-44c4-a510-eb260b9c8149-kube-api-access-5gq4g\") pod \"route-controller-manager-5b8989f9dc-vnswc\" (UID: \"5addafbe-0ed8-44c4-a510-eb260b9c8149\") " pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.890614 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5addafbe-0ed8-44c4-a510-eb260b9c8149-config\") pod \"route-controller-manager-5b8989f9dc-vnswc\" (UID: \"5addafbe-0ed8-44c4-a510-eb260b9c8149\") " pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.890845 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5addafbe-0ed8-44c4-a510-eb260b9c8149-client-ca\") pod \"route-controller-manager-5b8989f9dc-vnswc\" (UID: \"5addafbe-0ed8-44c4-a510-eb260b9c8149\") " pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.891686 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5addafbe-0ed8-44c4-a510-eb260b9c8149-client-ca\") pod \"route-controller-manager-5b8989f9dc-vnswc\" (UID: \"5addafbe-0ed8-44c4-a510-eb260b9c8149\") " pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc"
Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.892375 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName:
\"kubernetes.io/configmap/5addafbe-0ed8-44c4-a510-eb260b9c8149-config\") pod \"route-controller-manager-5b8989f9dc-vnswc\" (UID: \"5addafbe-0ed8-44c4-a510-eb260b9c8149\") " pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc" Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.896515 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5addafbe-0ed8-44c4-a510-eb260b9c8149-serving-cert\") pod \"route-controller-manager-5b8989f9dc-vnswc\" (UID: \"5addafbe-0ed8-44c4-a510-eb260b9c8149\") " pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc" Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.918936 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gq4g\" (UniqueName: \"kubernetes.io/projected/5addafbe-0ed8-44c4-a510-eb260b9c8149-kube-api-access-5gq4g\") pod \"route-controller-manager-5b8989f9dc-vnswc\" (UID: \"5addafbe-0ed8-44c4-a510-eb260b9c8149\") " pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc" Feb 02 17:18:30 crc kubenswrapper[4858]: I0202 17:18:30.987816 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc" Feb 02 17:18:31 crc kubenswrapper[4858]: I0202 17:18:31.173682 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc"] Feb 02 17:18:31 crc kubenswrapper[4858]: W0202 17:18:31.177680 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5addafbe_0ed8_44c4_a510_eb260b9c8149.slice/crio-88835589bd07e2fa3b4946fb55719800f9b12230ea230e5815037a9a188ef082 WatchSource:0}: Error finding container 88835589bd07e2fa3b4946fb55719800f9b12230ea230e5815037a9a188ef082: Status 404 returned error can't find the container with id 88835589bd07e2fa3b4946fb55719800f9b12230ea230e5815037a9a188ef082 Feb 02 17:18:31 crc kubenswrapper[4858]: I0202 17:18:31.374232 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5nj69" Feb 02 17:18:31 crc kubenswrapper[4858]: I0202 17:18:31.407843 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5nj69" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.131759 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc" event={"ID":"5addafbe-0ed8-44c4-a510-eb260b9c8149","Type":"ContainerStarted","Data":"e737a754e0ad919a882f0789e2d9ddc4af8c017d01a48a4197171184450a77bb"} Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.131865 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc" event={"ID":"5addafbe-0ed8-44c4-a510-eb260b9c8149","Type":"ContainerStarted","Data":"88835589bd07e2fa3b4946fb55719800f9b12230ea230e5815037a9a188ef082"} Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.132259 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.154286 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc" podStartSLOduration=4.15426843 podStartE2EDuration="4.15426843s" podCreationTimestamp="2026-02-02 17:18:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:18:32.151792991 +0000 UTC m=+213.304208266" watchObservedRunningTime="2026-02-02 17:18:32.15426843 +0000 UTC m=+213.306683685" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.256167 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.396246 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vqhch"] Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.396538 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vqhch" podUID="2bd40f69-131b-4d0c-87d9-bfae63f9a4eb" containerName="registry-server" containerID="cri-o://8bfdf407314bd7b21d52c94dd7d2f4f4581db7ddfd43b727d43289e9bf141d58" gracePeriod=2 Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.405668 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68b434d7-79a1-4449-ba3e-2615931cc0b4" path="/var/lib/kubelet/pods/68b434d7-79a1-4449-ba3e-2615931cc0b4/volumes" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.647866 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4"] Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.648667 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.656563 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.656854 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.657165 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.657471 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.658233 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.658641 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.672082 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.672141 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4"] Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.769043 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vqhch" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.816094 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-config\") pod \"controller-manager-578bd4d5b5-lm6g4\" (UID: \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\") " pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.816151 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrpmp\" (UniqueName: \"kubernetes.io/projected/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-kube-api-access-mrpmp\") pod \"controller-manager-578bd4d5b5-lm6g4\" (UID: \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\") " pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.816200 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-serving-cert\") pod \"controller-manager-578bd4d5b5-lm6g4\" (UID: \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\") " pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.816223 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-proxy-ca-bundles\") pod \"controller-manager-578bd4d5b5-lm6g4\" (UID: \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\") " pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.816238 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-client-ca\") pod \"controller-manager-578bd4d5b5-lm6g4\" (UID: \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\") " pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.917177 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bd40f69-131b-4d0c-87d9-bfae63f9a4eb-catalog-content\") pod \"2bd40f69-131b-4d0c-87d9-bfae63f9a4eb\" (UID: \"2bd40f69-131b-4d0c-87d9-bfae63f9a4eb\") " Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.917587 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bd40f69-131b-4d0c-87d9-bfae63f9a4eb-utilities\") pod \"2bd40f69-131b-4d0c-87d9-bfae63f9a4eb\" (UID: \"2bd40f69-131b-4d0c-87d9-bfae63f9a4eb\") " Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.917777 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrhl2\" (UniqueName: \"kubernetes.io/projected/2bd40f69-131b-4d0c-87d9-bfae63f9a4eb-kube-api-access-wrhl2\") pod \"2bd40f69-131b-4d0c-87d9-bfae63f9a4eb\" (UID: \"2bd40f69-131b-4d0c-87d9-bfae63f9a4eb\") " Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.918093 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-config\") pod \"controller-manager-578bd4d5b5-lm6g4\" (UID: \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\") " pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.918322 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrpmp\" (UniqueName: \"kubernetes.io/projected/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-kube-api-access-mrpmp\") pod \"controller-manager-578bd4d5b5-lm6g4\" (UID: \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\") " pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.918552 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-serving-cert\") pod \"controller-manager-578bd4d5b5-lm6g4\" (UID: \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\") " pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.918734 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-proxy-ca-bundles\") pod \"controller-manager-578bd4d5b5-lm6g4\" (UID: \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\") " pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.918879 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-client-ca\") pod \"controller-manager-578bd4d5b5-lm6g4\" (UID: \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\") " pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.919050 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bd40f69-131b-4d0c-87d9-bfae63f9a4eb-utilities" (OuterVolumeSpecName: "utilities") pod "2bd40f69-131b-4d0c-87d9-bfae63f9a4eb" (UID: "2bd40f69-131b-4d0c-87d9-bfae63f9a4eb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.919765 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-client-ca\") pod \"controller-manager-578bd4d5b5-lm6g4\" (UID: \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\") " pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.919771 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-proxy-ca-bundles\") pod \"controller-manager-578bd4d5b5-lm6g4\" (UID: \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\") " pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.925609 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-config\") pod \"controller-manager-578bd4d5b5-lm6g4\" (UID: \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\") " pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.925913 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd40f69-131b-4d0c-87d9-bfae63f9a4eb-kube-api-access-wrhl2" (OuterVolumeSpecName: "kube-api-access-wrhl2") pod "2bd40f69-131b-4d0c-87d9-bfae63f9a4eb" (UID: "2bd40f69-131b-4d0c-87d9-bfae63f9a4eb"). InnerVolumeSpecName "kube-api-access-wrhl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.926855 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-serving-cert\") pod \"controller-manager-578bd4d5b5-lm6g4\" (UID: \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\") " pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.941900 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrpmp\" (UniqueName: \"kubernetes.io/projected/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-kube-api-access-mrpmp\") pod \"controller-manager-578bd4d5b5-lm6g4\" (UID: \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\") " pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.964012 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bd40f69-131b-4d0c-87d9-bfae63f9a4eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2bd40f69-131b-4d0c-87d9-bfae63f9a4eb" (UID: "2bd40f69-131b-4d0c-87d9-bfae63f9a4eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:18:32 crc kubenswrapper[4858]: I0202 17:18:32.994249 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" Feb 02 17:18:33 crc kubenswrapper[4858]: I0202 17:18:33.025766 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrhl2\" (UniqueName: \"kubernetes.io/projected/2bd40f69-131b-4d0c-87d9-bfae63f9a4eb-kube-api-access-wrhl2\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:33 crc kubenswrapper[4858]: I0202 17:18:33.025814 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bd40f69-131b-4d0c-87d9-bfae63f9a4eb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:33 crc kubenswrapper[4858]: I0202 17:18:33.025826 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bd40f69-131b-4d0c-87d9-bfae63f9a4eb-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:33 crc kubenswrapper[4858]: I0202 17:18:33.139625 4858 generic.go:334] "Generic (PLEG): container finished" podID="2bd40f69-131b-4d0c-87d9-bfae63f9a4eb" containerID="8bfdf407314bd7b21d52c94dd7d2f4f4581db7ddfd43b727d43289e9bf141d58" exitCode=0 Feb 02 17:18:33 crc kubenswrapper[4858]: I0202 17:18:33.139677 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqhch" event={"ID":"2bd40f69-131b-4d0c-87d9-bfae63f9a4eb","Type":"ContainerDied","Data":"8bfdf407314bd7b21d52c94dd7d2f4f4581db7ddfd43b727d43289e9bf141d58"} Feb 02 17:18:33 crc kubenswrapper[4858]: I0202 17:18:33.139749 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vqhch" event={"ID":"2bd40f69-131b-4d0c-87d9-bfae63f9a4eb","Type":"ContainerDied","Data":"647ea2a44767336b9b7204659db105e169f82d3a59e3b9b109f979981e0ecbf2"} Feb 02 17:18:33 crc kubenswrapper[4858]: I0202 17:18:33.139762 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vqhch" Feb 02 17:18:33 crc kubenswrapper[4858]: I0202 17:18:33.139774 4858 scope.go:117] "RemoveContainer" containerID="8bfdf407314bd7b21d52c94dd7d2f4f4581db7ddfd43b727d43289e9bf141d58" Feb 02 17:18:33 crc kubenswrapper[4858]: I0202 17:18:33.156180 4858 scope.go:117] "RemoveContainer" containerID="2f8cdc2381fc948cafc4792a7d58234c999e9b3df46c6aba00258b581eae955a" Feb 02 17:18:33 crc kubenswrapper[4858]: I0202 17:18:33.171806 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vqhch"] Feb 02 17:18:33 crc kubenswrapper[4858]: I0202 17:18:33.188441 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vqhch"] Feb 02 17:18:33 crc kubenswrapper[4858]: I0202 17:18:33.208088 4858 scope.go:117] "RemoveContainer" containerID="fa92b68e5c3362010f6119fa7a2db7c9c0367c2d9d59a82abec419adbb25d62e" Feb 02 17:18:33 crc kubenswrapper[4858]: I0202 17:18:33.226405 4858 scope.go:117] "RemoveContainer" containerID="8bfdf407314bd7b21d52c94dd7d2f4f4581db7ddfd43b727d43289e9bf141d58" Feb 02 17:18:33 crc kubenswrapper[4858]: E0202 17:18:33.226990 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bfdf407314bd7b21d52c94dd7d2f4f4581db7ddfd43b727d43289e9bf141d58\": container with ID starting with 8bfdf407314bd7b21d52c94dd7d2f4f4581db7ddfd43b727d43289e9bf141d58 not found: ID does not exist" containerID="8bfdf407314bd7b21d52c94dd7d2f4f4581db7ddfd43b727d43289e9bf141d58" Feb 02 17:18:33 crc kubenswrapper[4858]: I0202 17:18:33.227039 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bfdf407314bd7b21d52c94dd7d2f4f4581db7ddfd43b727d43289e9bf141d58"} err="failed to get container status \"8bfdf407314bd7b21d52c94dd7d2f4f4581db7ddfd43b727d43289e9bf141d58\": rpc error: code = NotFound desc = could not find container \"8bfdf407314bd7b21d52c94dd7d2f4f4581db7ddfd43b727d43289e9bf141d58\": container with ID starting with 8bfdf407314bd7b21d52c94dd7d2f4f4581db7ddfd43b727d43289e9bf141d58 not found: ID does not exist" Feb 02 17:18:33 crc kubenswrapper[4858]: I0202 17:18:33.227067 4858 scope.go:117] "RemoveContainer" containerID="2f8cdc2381fc948cafc4792a7d58234c999e9b3df46c6aba00258b581eae955a" Feb 02 17:18:33 crc kubenswrapper[4858]: E0202 17:18:33.227420 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f8cdc2381fc948cafc4792a7d58234c999e9b3df46c6aba00258b581eae955a\": container with ID starting with 2f8cdc2381fc948cafc4792a7d58234c999e9b3df46c6aba00258b581eae955a not found: ID does not exist" containerID="2f8cdc2381fc948cafc4792a7d58234c999e9b3df46c6aba00258b581eae955a" Feb 02 17:18:33 crc kubenswrapper[4858]: I0202 17:18:33.227449 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f8cdc2381fc948cafc4792a7d58234c999e9b3df46c6aba00258b581eae955a"} err="failed to get container status \"2f8cdc2381fc948cafc4792a7d58234c999e9b3df46c6aba00258b581eae955a\": rpc error: code = NotFound desc = could not find container \"2f8cdc2381fc948cafc4792a7d58234c999e9b3df46c6aba00258b581eae955a\": container with ID starting with 2f8cdc2381fc948cafc4792a7d58234c999e9b3df46c6aba00258b581eae955a not found: ID does not exist" Feb 02 17:18:33 crc kubenswrapper[4858]: I0202 17:18:33.227467 4858 scope.go:117] "RemoveContainer" 
containerID="fa92b68e5c3362010f6119fa7a2db7c9c0367c2d9d59a82abec419adbb25d62e" Feb 02 17:18:33 crc kubenswrapper[4858]: E0202 17:18:33.228091 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa92b68e5c3362010f6119fa7a2db7c9c0367c2d9d59a82abec419adbb25d62e\": container with ID starting with fa92b68e5c3362010f6119fa7a2db7c9c0367c2d9d59a82abec419adbb25d62e not found: ID does not exist" containerID="fa92b68e5c3362010f6119fa7a2db7c9c0367c2d9d59a82abec419adbb25d62e" Feb 02 17:18:33 crc kubenswrapper[4858]: I0202 17:18:33.228140 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa92b68e5c3362010f6119fa7a2db7c9c0367c2d9d59a82abec419adbb25d62e"} err="failed to get container status \"fa92b68e5c3362010f6119fa7a2db7c9c0367c2d9d59a82abec419adbb25d62e\": rpc error: code = NotFound desc = could not find container \"fa92b68e5c3362010f6119fa7a2db7c9c0367c2d9d59a82abec419adbb25d62e\": container with ID starting with fa92b68e5c3362010f6119fa7a2db7c9c0367c2d9d59a82abec419adbb25d62e not found: ID does not exist" Feb 02 17:18:33 crc kubenswrapper[4858]: I0202 17:18:33.267583 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4"] Feb 02 17:18:33 crc kubenswrapper[4858]: W0202 17:18:33.273019 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7740f993_ea3a_46ac_8d1d_b81bc3d5b4a7.slice/crio-370b8cf0c089f1b61176db1c18ff73ef768722bc4f5fe4e0b76f321a8ec4ecc5 WatchSource:0}: Error finding container 370b8cf0c089f1b61176db1c18ff73ef768722bc4f5fe4e0b76f321a8ec4ecc5: Status 404 returned error can't find the container with id 370b8cf0c089f1b61176db1c18ff73ef768722bc4f5fe4e0b76f321a8ec4ecc5 Feb 02 17:18:34 crc kubenswrapper[4858]: I0202 17:18:34.147904 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" event={"ID":"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7","Type":"ContainerStarted","Data":"0b7ad0ae84e3b21619cc8e4c6511d9840f3b01ace9dc212a150766cf701fcd78"} Feb 02 17:18:34 crc kubenswrapper[4858]: I0202 17:18:34.148258 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" event={"ID":"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7","Type":"ContainerStarted","Data":"370b8cf0c089f1b61176db1c18ff73ef768722bc4f5fe4e0b76f321a8ec4ecc5"} Feb 02 17:18:34 crc kubenswrapper[4858]: I0202 17:18:34.164678 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" podStartSLOduration=6.164656567 podStartE2EDuration="6.164656567s" podCreationTimestamp="2026-02-02 17:18:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:18:34.162793677 +0000 UTC m=+215.315208942" watchObservedRunningTime="2026-02-02 17:18:34.164656567 +0000 UTC m=+215.317071832" Feb 02 17:18:34 crc kubenswrapper[4858]: I0202 17:18:34.413741 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bd40f69-131b-4d0c-87d9-bfae63f9a4eb" path="/var/lib/kubelet/pods/2bd40f69-131b-4d0c-87d9-bfae63f9a4eb/volumes" Feb 02 17:18:34 crc kubenswrapper[4858]: I0202 17:18:34.597753 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-5nj69"] Feb 02 17:18:34 crc kubenswrapper[4858]: I0202 17:18:34.597984 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5nj69" podUID="9015cfbb-4091-4598-b5fd-007d2372a89e" containerName="registry-server" containerID="cri-o://62cdfb76ed077010d6dcf3ee7b6ee0223d661ccdf07bf1fcea32a14841fb23c2" gracePeriod=2 Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.078597 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" podUID="5ce76d15-6d25-4fe4-88e8-bde4a27c5a73" containerName="oauth-openshift" containerID="cri-o://81071d6892480c064dc70d15445461c76a54bc11e76351541c846445fde3ea9c" gracePeriod=15 Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.158465 4858 generic.go:334] "Generic (PLEG): container finished" podID="9015cfbb-4091-4598-b5fd-007d2372a89e" containerID="62cdfb76ed077010d6dcf3ee7b6ee0223d661ccdf07bf1fcea32a14841fb23c2" exitCode=0 Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.158562 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nj69" event={"ID":"9015cfbb-4091-4598-b5fd-007d2372a89e","Type":"ContainerDied","Data":"62cdfb76ed077010d6dcf3ee7b6ee0223d661ccdf07bf1fcea32a14841fb23c2"} Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.158942 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.168569 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.732985 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5nj69" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.760646 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbpdw\" (UniqueName: \"kubernetes.io/projected/9015cfbb-4091-4598-b5fd-007d2372a89e-kube-api-access-sbpdw\") pod \"9015cfbb-4091-4598-b5fd-007d2372a89e\" (UID: \"9015cfbb-4091-4598-b5fd-007d2372a89e\") " Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.760709 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9015cfbb-4091-4598-b5fd-007d2372a89e-utilities\") pod \"9015cfbb-4091-4598-b5fd-007d2372a89e\" (UID: \"9015cfbb-4091-4598-b5fd-007d2372a89e\") " Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.760740 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9015cfbb-4091-4598-b5fd-007d2372a89e-catalog-content\") pod \"9015cfbb-4091-4598-b5fd-007d2372a89e\" (UID: \"9015cfbb-4091-4598-b5fd-007d2372a89e\") " Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.762438 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9015cfbb-4091-4598-b5fd-007d2372a89e-utilities" (OuterVolumeSpecName: "utilities") pod "9015cfbb-4091-4598-b5fd-007d2372a89e" (UID: "9015cfbb-4091-4598-b5fd-007d2372a89e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.766621 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9015cfbb-4091-4598-b5fd-007d2372a89e-kube-api-access-sbpdw" (OuterVolumeSpecName: "kube-api-access-sbpdw") pod "9015cfbb-4091-4598-b5fd-007d2372a89e" (UID: "9015cfbb-4091-4598-b5fd-007d2372a89e"). InnerVolumeSpecName "kube-api-access-sbpdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.799394 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.861717 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-audit-policies\") pod \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.861793 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-template-login\") pod \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.861838 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-template-error\") pod \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.861864 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-router-certs\") pod \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.861895 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-cliconfig\") pod \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.861936 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-template-provider-selection\") pod \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.861963 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf86t\" (UniqueName: \"kubernetes.io/projected/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-kube-api-access-gf86t\") pod \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.862078 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-serving-cert\") pod \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.863152 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-trusted-ca-bundle\") pod \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.863199 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-ocp-branding-template\") pod \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.863243 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-session\") pod \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.863274 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-audit-dir\") pod \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.863303 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-idp-0-file-data\") pod \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.863334 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-service-ca\") pod \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\" (UID: \"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73\") " Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.863724 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbpdw\" (UniqueName: \"kubernetes.io/projected/9015cfbb-4091-4598-b5fd-007d2372a89e-kube-api-access-sbpdw\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.863743 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9015cfbb-4091-4598-b5fd-007d2372a89e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.864008 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73" (UID: "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.864042 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73" (UID: "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.864252 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73" (UID: "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.864662 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73" (UID: "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.865664 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-kube-api-access-gf86t" (OuterVolumeSpecName: "kube-api-access-gf86t") pod "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73" (UID: "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73"). InnerVolumeSpecName "kube-api-access-gf86t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.865669 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73" (UID: "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.866484 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73" (UID: "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.866690 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73" (UID: "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.866780 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73" (UID: "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.868460 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73" (UID: "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.868662 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73" (UID: "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.869511 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73" (UID: "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.873653 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73" (UID: "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.886043 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73" (UID: "5ce76d15-6d25-4fe4-88e8-bde4a27c5a73"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.887810 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9015cfbb-4091-4598-b5fd-007d2372a89e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9015cfbb-4091-4598-b5fd-007d2372a89e" (UID: "9015cfbb-4091-4598-b5fd-007d2372a89e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.965948 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.966016 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.966031 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.966045 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.966059 4858 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.966072 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.966085 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.966098 4858 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.966111 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.966125 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9015cfbb-4091-4598-b5fd-007d2372a89e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.966138 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.966150 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-router-certs\") on node \"crc\" 
DevicePath \"\"" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.966163 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.966178 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf86t\" (UniqueName: \"kubernetes.io/projected/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-kube-api-access-gf86t\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:35 crc kubenswrapper[4858]: I0202 17:18:35.966190 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:36 crc kubenswrapper[4858]: I0202 17:18:36.167801 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nj69" event={"ID":"9015cfbb-4091-4598-b5fd-007d2372a89e","Type":"ContainerDied","Data":"fd89ad2ac9e27dcd4e335970b747f3016bdd1b7a6fa8a74899c0062479dca989"} Feb 02 17:18:36 crc kubenswrapper[4858]: I0202 17:18:36.167869 4858 scope.go:117] "RemoveContainer" containerID="62cdfb76ed077010d6dcf3ee7b6ee0223d661ccdf07bf1fcea32a14841fb23c2" Feb 02 17:18:36 crc kubenswrapper[4858]: I0202 17:18:36.167852 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5nj69" Feb 02 17:18:36 crc kubenswrapper[4858]: I0202 17:18:36.171019 4858 generic.go:334] "Generic (PLEG): container finished" podID="5ce76d15-6d25-4fe4-88e8-bde4a27c5a73" containerID="81071d6892480c064dc70d15445461c76a54bc11e76351541c846445fde3ea9c" exitCode=0 Feb 02 17:18:36 crc kubenswrapper[4858]: I0202 17:18:36.171811 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" Feb 02 17:18:36 crc kubenswrapper[4858]: I0202 17:18:36.176428 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" event={"ID":"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73","Type":"ContainerDied","Data":"81071d6892480c064dc70d15445461c76a54bc11e76351541c846445fde3ea9c"} Feb 02 17:18:36 crc kubenswrapper[4858]: I0202 17:18:36.176533 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9n5ph" event={"ID":"5ce76d15-6d25-4fe4-88e8-bde4a27c5a73","Type":"ContainerDied","Data":"b168df4970b8b3cf896486a9f4e4574cfc47d0f873774be06aa8b8fb1994cdf2"} Feb 02 17:18:36 crc kubenswrapper[4858]: I0202 17:18:36.188034 4858 scope.go:117] "RemoveContainer" containerID="e3f0226f133a7ad5c5f53618f816b8174a98cec6f63973ae077d02290f197c06" Feb 02 17:18:36 crc kubenswrapper[4858]: I0202 17:18:36.216960 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5nj69"] Feb 02 17:18:36 crc kubenswrapper[4858]: I0202 17:18:36.223061 4858 scope.go:117] "RemoveContainer" containerID="23512c88d51369a11d0099a8ad9d66f399282c3cf0509161b2e04b8333a4ecf8" Feb 02 17:18:36 crc kubenswrapper[4858]: I0202 17:18:36.229681 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5nj69"] Feb 02 17:18:36 crc kubenswrapper[4858]: I0202 17:18:36.238591 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9n5ph"] Feb 02 17:18:36 crc kubenswrapper[4858]: I0202 17:18:36.242554 4858 scope.go:117] "RemoveContainer" containerID="81071d6892480c064dc70d15445461c76a54bc11e76351541c846445fde3ea9c" Feb 02 17:18:36 crc kubenswrapper[4858]: I0202 17:18:36.243013 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9n5ph"] Feb 02 17:18:36 crc kubenswrapper[4858]: I0202 17:18:36.266841 4858 scope.go:117] "RemoveContainer" containerID="81071d6892480c064dc70d15445461c76a54bc11e76351541c846445fde3ea9c" Feb 02 17:18:36 crc kubenswrapper[4858]: E0202 17:18:36.267741 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81071d6892480c064dc70d15445461c76a54bc11e76351541c846445fde3ea9c\": container with ID starting with 81071d6892480c064dc70d15445461c76a54bc11e76351541c846445fde3ea9c not found: ID does not exist" containerID="81071d6892480c064dc70d15445461c76a54bc11e76351541c846445fde3ea9c" Feb 02 17:18:36 crc kubenswrapper[4858]: I0202 17:18:36.267851 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81071d6892480c064dc70d15445461c76a54bc11e76351541c846445fde3ea9c"} err="failed to get container status \"81071d6892480c064dc70d15445461c76a54bc11e76351541c846445fde3ea9c\": rpc error: code = NotFound desc = could not find container \"81071d6892480c064dc70d15445461c76a54bc11e76351541c846445fde3ea9c\": container with ID starting with 81071d6892480c064dc70d15445461c76a54bc11e76351541c846445fde3ea9c not found: ID does not exist" Feb 02 17:18:36 crc kubenswrapper[4858]: I0202 17:18:36.409572 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ce76d15-6d25-4fe4-88e8-bde4a27c5a73" path="/var/lib/kubelet/pods/5ce76d15-6d25-4fe4-88e8-bde4a27c5a73/volumes" Feb 02 17:18:36 crc kubenswrapper[4858]: I0202 17:18:36.410427 4858 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9015cfbb-4091-4598-b5fd-007d2372a89e" path="/var/lib/kubelet/pods/9015cfbb-4091-4598-b5fd-007d2372a89e/volumes" Feb 02 17:18:37 crc kubenswrapper[4858]: I0202 17:18:37.949946 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7x482" Feb 02 17:18:37 crc kubenswrapper[4858]: I0202 17:18:37.951997 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7x482" Feb 02 17:18:38 crc kubenswrapper[4858]: I0202 17:18:38.004492 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7x482" Feb 02 17:18:38 crc kubenswrapper[4858]: I0202 17:18:38.255253 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7x482" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.653746 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-75894779c6-zk5dr"] Feb 02 17:18:44 crc kubenswrapper[4858]: E0202 17:18:44.654210 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd40f69-131b-4d0c-87d9-bfae63f9a4eb" containerName="extract-utilities" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.654225 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd40f69-131b-4d0c-87d9-bfae63f9a4eb" containerName="extract-utilities" Feb 02 17:18:44 crc kubenswrapper[4858]: E0202 17:18:44.654238 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd40f69-131b-4d0c-87d9-bfae63f9a4eb" containerName="extract-content" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.654246 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd40f69-131b-4d0c-87d9-bfae63f9a4eb" containerName="extract-content" Feb 02 17:18:44 crc kubenswrapper[4858]: E0202 17:18:44.654256 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9015cfbb-4091-4598-b5fd-007d2372a89e" containerName="registry-server" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.654264 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9015cfbb-4091-4598-b5fd-007d2372a89e" containerName="registry-server" Feb 02 17:18:44 crc kubenswrapper[4858]: E0202 17:18:44.654281 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ce76d15-6d25-4fe4-88e8-bde4a27c5a73" containerName="oauth-openshift" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.654288 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ce76d15-6d25-4fe4-88e8-bde4a27c5a73" containerName="oauth-openshift" Feb 02 17:18:44 crc kubenswrapper[4858]: E0202 17:18:44.654299 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9015cfbb-4091-4598-b5fd-007d2372a89e" containerName="extract-content" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.654307 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9015cfbb-4091-4598-b5fd-007d2372a89e" containerName="extract-content" Feb 02 17:18:44 crc kubenswrapper[4858]: E0202 17:18:44.654321 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9015cfbb-4091-4598-b5fd-007d2372a89e" containerName="extract-utilities" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.654328 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9015cfbb-4091-4598-b5fd-007d2372a89e" containerName="extract-utilities" Feb 02 17:18:44 crc kubenswrapper[4858]: 
E0202 17:18:44.654340 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd40f69-131b-4d0c-87d9-bfae63f9a4eb" containerName="registry-server" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.654347 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd40f69-131b-4d0c-87d9-bfae63f9a4eb" containerName="registry-server" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.654452 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ce76d15-6d25-4fe4-88e8-bde4a27c5a73" containerName="oauth-openshift" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.654466 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd40f69-131b-4d0c-87d9-bfae63f9a4eb" containerName="registry-server" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.654479 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9015cfbb-4091-4598-b5fd-007d2372a89e" containerName="registry-server" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.654896 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.658739 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.659013 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.659066 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.659181 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.659439 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.659715 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.659778 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.662274 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.662455 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.662556 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.663771 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.665551 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.667836 4858 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.672460 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.677933 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.680814 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-75894779c6-zk5dr"] Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.682729 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.682852 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-user-template-login\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.682916 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-audit-policies\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.683055 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-system-service-ca\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.683137 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.683195 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-system-router-certs\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.683266 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.683317 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vxk5\" (UniqueName: \"kubernetes.io/projected/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-kube-api-access-7vxk5\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.683397 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-user-template-error\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.683471 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-system-session\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.683536 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.683614 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.683698 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.683765 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-audit-dir\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " 
pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.784709 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.784764 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vxk5\" (UniqueName: \"kubernetes.io/projected/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-kube-api-access-7vxk5\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.784805 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-user-template-error\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.784834 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-system-session\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.784861 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.784904 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.784924 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.784948 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-audit-dir\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " 
pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.784996 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.785030 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-user-template-login\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.785057 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-audit-policies\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.785089 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-system-service-ca\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.785117 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.785144 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-system-router-certs\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.786246 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.786747 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-audit-dir\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 
17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.787487 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-system-service-ca\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.788269 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-audit-policies\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.788373 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.791506 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-user-template-login\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.791522 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.791591 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-system-router-certs\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.791809 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.793077 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-user-template-error\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.793587 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.794498 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-system-session\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.797660 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.809570 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vxk5\" (UniqueName: \"kubernetes.io/projected/53a274b3-263f-4fa7-bbd4-a5c84ba88fcb-kube-api-access-7vxk5\") pod \"oauth-openshift-75894779c6-zk5dr\" (UID: \"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb\") " pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:44 crc kubenswrapper[4858]: I0202 17:18:44.986328 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:45 crc kubenswrapper[4858]: I0202 17:18:45.445687 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-75894779c6-zk5dr"] Feb 02 17:18:45 crc kubenswrapper[4858]: W0202 17:18:45.465099 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53a274b3_263f_4fa7_bbd4_a5c84ba88fcb.slice/crio-c2cfb51136b5bf4797deb5e124c5736fa53a27308220402a47c2efeca7e5ba99 WatchSource:0}: Error finding container c2cfb51136b5bf4797deb5e124c5736fa53a27308220402a47c2efeca7e5ba99: Status 404 returned error can't find the container with id c2cfb51136b5bf4797deb5e124c5736fa53a27308220402a47c2efeca7e5ba99 Feb 02 17:18:46 crc kubenswrapper[4858]: I0202 17:18:46.247799 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" event={"ID":"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb","Type":"ContainerStarted","Data":"fb904067ec53f17ee88ff7e0e36574c8e89ea3b9205165926bea05037e46d344"} Feb 02 17:18:46 crc kubenswrapper[4858]: I0202 17:18:46.248176 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:46 crc kubenswrapper[4858]: I0202 17:18:46.248219 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" event={"ID":"53a274b3-263f-4fa7-bbd4-a5c84ba88fcb","Type":"ContainerStarted","Data":"c2cfb51136b5bf4797deb5e124c5736fa53a27308220402a47c2efeca7e5ba99"} Feb 02 17:18:46 crc kubenswrapper[4858]: I0202 17:18:46.253260 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" Feb 02 17:18:46 crc kubenswrapper[4858]: I0202 17:18:46.267373 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-75894779c6-zk5dr" podStartSLOduration=36.26735465 podStartE2EDuration="36.26735465s" podCreationTimestamp="2026-02-02 17:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:18:46.265633324 +0000 UTC m=+227.418048639" watchObservedRunningTime="2026-02-02 17:18:46.26735465 +0000 UTC m=+227.419769915" Feb 02 17:18:48 crc kubenswrapper[4858]: I0202 17:18:48.836343 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4"] Feb 02 17:18:48 crc kubenswrapper[4858]: I0202 17:18:48.836916 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" podUID="7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7" containerName="controller-manager" containerID="cri-o://0b7ad0ae84e3b21619cc8e4c6511d9840f3b01ace9dc212a150766cf701fcd78" gracePeriod=30 Feb 02 17:18:48 crc kubenswrapper[4858]: I0202 17:18:48.921562 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc"] Feb 02 17:18:48 crc kubenswrapper[4858]: I0202 17:18:48.921814 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc" podUID="5addafbe-0ed8-44c4-a510-eb260b9c8149" containerName="route-controller-manager" 
containerID="cri-o://e737a754e0ad919a882f0789e2d9ddc4af8c017d01a48a4197171184450a77bb" gracePeriod=30 Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.266317 4858 generic.go:334] "Generic (PLEG): container finished" podID="7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7" containerID="0b7ad0ae84e3b21619cc8e4c6511d9840f3b01ace9dc212a150766cf701fcd78" exitCode=0 Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.266418 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" event={"ID":"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7","Type":"ContainerDied","Data":"0b7ad0ae84e3b21619cc8e4c6511d9840f3b01ace9dc212a150766cf701fcd78"} Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.268130 4858 generic.go:334] "Generic (PLEG): container finished" podID="5addafbe-0ed8-44c4-a510-eb260b9c8149" containerID="e737a754e0ad919a882f0789e2d9ddc4af8c017d01a48a4197171184450a77bb" exitCode=0 Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.268163 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc" event={"ID":"5addafbe-0ed8-44c4-a510-eb260b9c8149","Type":"ContainerDied","Data":"e737a754e0ad919a882f0789e2d9ddc4af8c017d01a48a4197171184450a77bb"} Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.384267 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.389187 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc" Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.450728 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-config\") pod \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\" (UID: \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\") " Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.450836 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-proxy-ca-bundles\") pod \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\" (UID: \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\") " Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.450871 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gq4g\" (UniqueName: \"kubernetes.io/projected/5addafbe-0ed8-44c4-a510-eb260b9c8149-kube-api-access-5gq4g\") pod \"5addafbe-0ed8-44c4-a510-eb260b9c8149\" (UID: \"5addafbe-0ed8-44c4-a510-eb260b9c8149\") " Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.450902 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-client-ca\") pod \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\" (UID: \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\") " Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.450925 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrpmp\" (UniqueName: \"kubernetes.io/projected/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-kube-api-access-mrpmp\") pod \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\" (UID: \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\") " Feb 02 17:18:49 crc 
kubenswrapper[4858]: I0202 17:18:49.450950 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-serving-cert\") pod \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\" (UID: \"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7\") " Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.451009 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5addafbe-0ed8-44c4-a510-eb260b9c8149-serving-cert\") pod \"5addafbe-0ed8-44c4-a510-eb260b9c8149\" (UID: \"5addafbe-0ed8-44c4-a510-eb260b9c8149\") " Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.451068 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5addafbe-0ed8-44c4-a510-eb260b9c8149-client-ca\") pod \"5addafbe-0ed8-44c4-a510-eb260b9c8149\" (UID: \"5addafbe-0ed8-44c4-a510-eb260b9c8149\") " Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.451089 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5addafbe-0ed8-44c4-a510-eb260b9c8149-config\") pod \"5addafbe-0ed8-44c4-a510-eb260b9c8149\" (UID: \"5addafbe-0ed8-44c4-a510-eb260b9c8149\") " Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.451769 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-client-ca" (OuterVolumeSpecName: "client-ca") pod "7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7" (UID: "7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.451892 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-config" (OuterVolumeSpecName: "config") pod "7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7" (UID: "7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.452138 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.452163 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.452429 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5addafbe-0ed8-44c4-a510-eb260b9c8149-client-ca" (OuterVolumeSpecName: "client-ca") pod "5addafbe-0ed8-44c4-a510-eb260b9c8149" (UID: "5addafbe-0ed8-44c4-a510-eb260b9c8149"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.452632 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5addafbe-0ed8-44c4-a510-eb260b9c8149-config" (OuterVolumeSpecName: "config") pod "5addafbe-0ed8-44c4-a510-eb260b9c8149" (UID: "5addafbe-0ed8-44c4-a510-eb260b9c8149"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.453352 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7" (UID: "7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.456267 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5addafbe-0ed8-44c4-a510-eb260b9c8149-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5addafbe-0ed8-44c4-a510-eb260b9c8149" (UID: "5addafbe-0ed8-44c4-a510-eb260b9c8149"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.456327 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-kube-api-access-mrpmp" (OuterVolumeSpecName: "kube-api-access-mrpmp") pod "7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7" (UID: "7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7"). InnerVolumeSpecName "kube-api-access-mrpmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.457715 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7" (UID: "7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.466129 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5addafbe-0ed8-44c4-a510-eb260b9c8149-kube-api-access-5gq4g" (OuterVolumeSpecName: "kube-api-access-5gq4g") pod "5addafbe-0ed8-44c4-a510-eb260b9c8149" (UID: "5addafbe-0ed8-44c4-a510-eb260b9c8149"). InnerVolumeSpecName "kube-api-access-5gq4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.553360 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5addafbe-0ed8-44c4-a510-eb260b9c8149-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.553399 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5addafbe-0ed8-44c4-a510-eb260b9c8149-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.553408 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.553418 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gq4g\" (UniqueName: \"kubernetes.io/projected/5addafbe-0ed8-44c4-a510-eb260b9c8149-kube-api-access-5gq4g\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.553430 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrpmp\" (UniqueName: \"kubernetes.io/projected/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-kube-api-access-mrpmp\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.553438 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:49 crc kubenswrapper[4858]: I0202 17:18:49.553446 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5addafbe-0ed8-44c4-a510-eb260b9c8149-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.276402 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" event={"ID":"7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7","Type":"ContainerDied","Data":"370b8cf0c089f1b61176db1c18ff73ef768722bc4f5fe4e0b76f321a8ec4ecc5"} Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.278156 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc" event={"ID":"5addafbe-0ed8-44c4-a510-eb260b9c8149","Type":"ContainerDied","Data":"88835589bd07e2fa3b4946fb55719800f9b12230ea230e5815037a9a188ef082"} Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.277932 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.276480 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.282421 4858 scope.go:117] "RemoveContainer" containerID="0b7ad0ae84e3b21619cc8e4c6511d9840f3b01ace9dc212a150766cf701fcd78" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.309078 4858 scope.go:117] "RemoveContainer" containerID="e737a754e0ad919a882f0789e2d9ddc4af8c017d01a48a4197171184450a77bb" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.328751 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4"] Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.337777 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-578bd4d5b5-lm6g4"] Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.341967 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc"] Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.346367 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b8989f9dc-vnswc"] Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.409635 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5addafbe-0ed8-44c4-a510-eb260b9c8149" path="/var/lib/kubelet/pods/5addafbe-0ed8-44c4-a510-eb260b9c8149/volumes" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.410332 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7" path="/var/lib/kubelet/pods/7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7/volumes" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.656512 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-78fc645688-mrqhc"] Feb 02 17:18:50 crc kubenswrapper[4858]: E0202 17:18:50.656764 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5addafbe-0ed8-44c4-a510-eb260b9c8149" containerName="route-controller-manager" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.656780 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5addafbe-0ed8-44c4-a510-eb260b9c8149" containerName="route-controller-manager" Feb 02 17:18:50 crc kubenswrapper[4858]: E0202 17:18:50.656796 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7" containerName="controller-manager" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.656805 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7" containerName="controller-manager" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.656932 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7740f993-ea3a-46ac-8d1d-b81bc3d5b4a7" containerName="controller-manager" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.656947 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5addafbe-0ed8-44c4-a510-eb260b9c8149" containerName="route-controller-manager" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.657940 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.659360 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.659926 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.662067 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.662186 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.662681 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.662738 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.663548 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f"] Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.664818 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.666310 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.666688 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.667022 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.667163 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.667636 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.668231 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.668946 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.671379 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78fc645688-mrqhc"] Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.674114 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f"] Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.765757 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0ce9a6ea-5877-4965-b47d-791152819234-serving-cert\") pod \"route-controller-manager-7d59ddbd88-hwc7f\" (UID: \"0ce9a6ea-5877-4965-b47d-791152819234\") " pod="openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.765820 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4648dbd-a863-412c-9709-fb3a673c4d39-client-ca\") pod \"controller-manager-78fc645688-mrqhc\" (UID: \"b4648dbd-a863-412c-9709-fb3a673c4d39\") " pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.765859 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4648dbd-a863-412c-9709-fb3a673c4d39-serving-cert\") pod \"controller-manager-78fc645688-mrqhc\" (UID: \"b4648dbd-a863-412c-9709-fb3a673c4d39\") " pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.765884 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bz78\" (UniqueName: \"kubernetes.io/projected/0ce9a6ea-5877-4965-b47d-791152819234-kube-api-access-4bz78\") pod \"route-controller-manager-7d59ddbd88-hwc7f\" (UID: \"0ce9a6ea-5877-4965-b47d-791152819234\") " pod="openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.765951 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce9a6ea-5877-4965-b47d-791152819234-config\") pod \"route-controller-manager-7d59ddbd88-hwc7f\" (UID: \"0ce9a6ea-5877-4965-b47d-791152819234\") " pod="openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.766001 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4tcb\" (UniqueName: \"kubernetes.io/projected/b4648dbd-a863-412c-9709-fb3a673c4d39-kube-api-access-w4tcb\") pod \"controller-manager-78fc645688-mrqhc\" (UID: \"b4648dbd-a863-412c-9709-fb3a673c4d39\") " pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.766109 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4648dbd-a863-412c-9709-fb3a673c4d39-proxy-ca-bundles\") pod \"controller-manager-78fc645688-mrqhc\" (UID: \"b4648dbd-a863-412c-9709-fb3a673c4d39\") " pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.766176 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ce9a6ea-5877-4965-b47d-791152819234-client-ca\") pod \"route-controller-manager-7d59ddbd88-hwc7f\" (UID: \"0ce9a6ea-5877-4965-b47d-791152819234\") " pod="openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.766330 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/b4648dbd-a863-412c-9709-fb3a673c4d39-config\") pod \"controller-manager-78fc645688-mrqhc\" (UID: \"b4648dbd-a863-412c-9709-fb3a673c4d39\") " pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.869104 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce9a6ea-5877-4965-b47d-791152819234-config\") pod \"route-controller-manager-7d59ddbd88-hwc7f\" (UID: \"0ce9a6ea-5877-4965-b47d-791152819234\") " pod="openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.869226 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4tcb\" (UniqueName: \"kubernetes.io/projected/b4648dbd-a863-412c-9709-fb3a673c4d39-kube-api-access-w4tcb\") pod \"controller-manager-78fc645688-mrqhc\" (UID: \"b4648dbd-a863-412c-9709-fb3a673c4d39\") " pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.869268 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4648dbd-a863-412c-9709-fb3a673c4d39-proxy-ca-bundles\") pod \"controller-manager-78fc645688-mrqhc\" (UID: \"b4648dbd-a863-412c-9709-fb3a673c4d39\") " pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.869293 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ce9a6ea-5877-4965-b47d-791152819234-client-ca\") pod \"route-controller-manager-7d59ddbd88-hwc7f\" (UID: \"0ce9a6ea-5877-4965-b47d-791152819234\") " pod="openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.869403 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4648dbd-a863-412c-9709-fb3a673c4d39-config\") pod \"controller-manager-78fc645688-mrqhc\" (UID: \"b4648dbd-a863-412c-9709-fb3a673c4d39\") " pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.869458 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ce9a6ea-5877-4965-b47d-791152819234-serving-cert\") pod \"route-controller-manager-7d59ddbd88-hwc7f\" (UID: \"0ce9a6ea-5877-4965-b47d-791152819234\") " pod="openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.869474 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4648dbd-a863-412c-9709-fb3a673c4d39-client-ca\") pod \"controller-manager-78fc645688-mrqhc\" (UID: \"b4648dbd-a863-412c-9709-fb3a673c4d39\") " pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.869493 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4648dbd-a863-412c-9709-fb3a673c4d39-serving-cert\") pod \"controller-manager-78fc645688-mrqhc\" (UID: 
\"b4648dbd-a863-412c-9709-fb3a673c4d39\") " pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.869508 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bz78\" (UniqueName: \"kubernetes.io/projected/0ce9a6ea-5877-4965-b47d-791152819234-kube-api-access-4bz78\") pod \"route-controller-manager-7d59ddbd88-hwc7f\" (UID: \"0ce9a6ea-5877-4965-b47d-791152819234\") " pod="openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.870299 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ce9a6ea-5877-4965-b47d-791152819234-config\") pod \"route-controller-manager-7d59ddbd88-hwc7f\" (UID: \"0ce9a6ea-5877-4965-b47d-791152819234\") " pod="openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.870560 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4648dbd-a863-412c-9709-fb3a673c4d39-client-ca\") pod \"controller-manager-78fc645688-mrqhc\" (UID: \"b4648dbd-a863-412c-9709-fb3a673c4d39\") " pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.871084 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ce9a6ea-5877-4965-b47d-791152819234-client-ca\") pod \"route-controller-manager-7d59ddbd88-hwc7f\" (UID: \"0ce9a6ea-5877-4965-b47d-791152819234\") " pod="openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.871627 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4648dbd-a863-412c-9709-fb3a673c4d39-config\") pod \"controller-manager-78fc645688-mrqhc\" (UID: \"b4648dbd-a863-412c-9709-fb3a673c4d39\") " pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.872337 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b4648dbd-a863-412c-9709-fb3a673c4d39-proxy-ca-bundles\") pod \"controller-manager-78fc645688-mrqhc\" (UID: \"b4648dbd-a863-412c-9709-fb3a673c4d39\") " pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.878653 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4648dbd-a863-412c-9709-fb3a673c4d39-serving-cert\") pod \"controller-manager-78fc645688-mrqhc\" (UID: \"b4648dbd-a863-412c-9709-fb3a673c4d39\") " pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.882741 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ce9a6ea-5877-4965-b47d-791152819234-serving-cert\") pod \"route-controller-manager-7d59ddbd88-hwc7f\" (UID: \"0ce9a6ea-5877-4965-b47d-791152819234\") " pod="openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.886534 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4bz78\" (UniqueName: \"kubernetes.io/projected/0ce9a6ea-5877-4965-b47d-791152819234-kube-api-access-4bz78\") pod \"route-controller-manager-7d59ddbd88-hwc7f\" (UID: \"0ce9a6ea-5877-4965-b47d-791152819234\") " pod="openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.898013 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4tcb\" (UniqueName: \"kubernetes.io/projected/b4648dbd-a863-412c-9709-fb3a673c4d39-kube-api-access-w4tcb\") pod \"controller-manager-78fc645688-mrqhc\" (UID: \"b4648dbd-a863-412c-9709-fb3a673c4d39\") " pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.993796 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" Feb 02 17:18:50 crc kubenswrapper[4858]: I0202 17:18:50.997320 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f" Feb 02 17:18:51 crc kubenswrapper[4858]: I0202 17:18:51.195920 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f"] Feb 02 17:18:51 crc kubenswrapper[4858]: I0202 17:18:51.287802 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f" event={"ID":"0ce9a6ea-5877-4965-b47d-791152819234","Type":"ContainerStarted","Data":"9ff1946bb64bb14596b5f699f6e8e73c9db7b15c21cc675be1eb22576e3dc938"} Feb 02 17:18:51 crc kubenswrapper[4858]: I0202 17:18:51.500473 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78fc645688-mrqhc"] Feb 02 17:18:52 crc kubenswrapper[4858]: I0202 17:18:52.298814 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" event={"ID":"b4648dbd-a863-412c-9709-fb3a673c4d39","Type":"ContainerStarted","Data":"3d3e72f40adcae4078023d5d0820694024f3d3fe4a65baf45afa9ba8b2f0a948"} Feb 02 17:18:52 crc kubenswrapper[4858]: I0202 17:18:52.298884 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" event={"ID":"b4648dbd-a863-412c-9709-fb3a673c4d39","Type":"ContainerStarted","Data":"2cfbcaf0dbe3dadeb6313c887bec73dc89f6840486674a178d88522bf7921f42"} Feb 02 17:18:52 crc kubenswrapper[4858]: I0202 17:18:52.299230 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" Feb 02 17:18:52 crc kubenswrapper[4858]: I0202 17:18:52.304829 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f" event={"ID":"0ce9a6ea-5877-4965-b47d-791152819234","Type":"ContainerStarted","Data":"2ec0814bce23d54762c76410cfda7691376f1bc9c942d4ad3306f7618f4d3530"} Feb 02 17:18:52 crc kubenswrapper[4858]: I0202 17:18:52.305296 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f" Feb 02 17:18:52 crc kubenswrapper[4858]: I0202 17:18:52.308133 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" Feb 02 17:18:52 crc kubenswrapper[4858]: I0202 17:18:52.312208 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f" Feb 02 17:18:52 crc kubenswrapper[4858]: I0202 17:18:52.321961 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-78fc645688-mrqhc" podStartSLOduration=4.32194358 podStartE2EDuration="4.32194358s" podCreationTimestamp="2026-02-02 17:18:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:18:52.321246668 +0000 UTC m=+233.473661933" watchObservedRunningTime="2026-02-02 17:18:52.32194358 +0000 UTC m=+233.474358845" Feb 02 17:18:52 crc kubenswrapper[4858]: I0202 17:18:52.350054 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d59ddbd88-hwc7f" podStartSLOduration=4.350026592 podStartE2EDuration="4.350026592s" podCreationTimestamp="2026-02-02 17:18:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:18:52.348854954 +0000 UTC m=+233.501270219" watchObservedRunningTime="2026-02-02 17:18:52.350026592 +0000 UTC m=+233.502441887" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.659576 4858 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.661148 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.662946 4858 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.663457 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea" gracePeriod=15 Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.663506 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e" gracePeriod=15 Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.663553 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903" gracePeriod=15 Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.663590 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8" gracePeriod=15 Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.663584 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e" gracePeriod=15 Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.665008 4858 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 17:19:03 crc kubenswrapper[4858]: E0202 17:19:03.665439 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.665456 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 17:19:03 crc kubenswrapper[4858]: E0202 17:19:03.665472 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.665480 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 17:19:03 crc kubenswrapper[4858]: E0202 17:19:03.665495 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.665503 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Feb 02 17:19:03 crc kubenswrapper[4858]: E0202 17:19:03.665518 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.665525 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 02 17:19:03 crc kubenswrapper[4858]: E0202 17:19:03.665535 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.665542 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 17:19:03 crc kubenswrapper[4858]: E0202 17:19:03.665553 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.665560 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.665675 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.665692 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.665704 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.665713 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.665726 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.665738 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 02 17:19:03 crc kubenswrapper[4858]: E0202 17:19:03.665855 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.665866 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.765845 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.765893 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.765946 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.866874 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.867255 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.866986 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.867320 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.867373 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.867391 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.867481 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.867531 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.867562 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.867580 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.867621 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.968564 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.968654 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.968752 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.968771 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.968844 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.968879 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.968925 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.968945 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.969083 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 17:19:03 crc kubenswrapper[4858]: I0202 17:19:03.969159 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 17:19:04 crc kubenswrapper[4858]: I0202 17:19:04.386586 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 17:19:04 crc kubenswrapper[4858]: I0202 17:19:04.388906 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 17:19:04 crc kubenswrapper[4858]: I0202 17:19:04.390216 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e" exitCode=0 Feb 02 17:19:04 crc kubenswrapper[4858]: I0202 17:19:04.390255 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8" exitCode=0 Feb 02 17:19:04 crc kubenswrapper[4858]: I0202 17:19:04.390273 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e" exitCode=0 Feb 02 17:19:04 crc kubenswrapper[4858]: I0202 17:19:04.390287 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903" exitCode=2 Feb 02 17:19:04 crc kubenswrapper[4858]: I0202 17:19:04.390369 4858 scope.go:117] "RemoveContainer" containerID="44c845208a2276260c2da187012b9667984470139c722ee4207c5ed406ea9c87" Feb 02 17:19:04 crc kubenswrapper[4858]: I0202 17:19:04.392665 4858 
generic.go:334] "Generic (PLEG): container finished" podID="85579d4b-0219-4f36-8251-755e28bbe3ba" containerID="c8558e1baececfa493b81898a267feecc1d4b0e1a661f4f46aa25821b220155a" exitCode=0 Feb 02 17:19:04 crc kubenswrapper[4858]: I0202 17:19:04.392710 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"85579d4b-0219-4f36-8251-755e28bbe3ba","Type":"ContainerDied","Data":"c8558e1baececfa493b81898a267feecc1d4b0e1a661f4f46aa25821b220155a"} Feb 02 17:19:04 crc kubenswrapper[4858]: I0202 17:19:04.393721 4858 status_manager.go:851] "Failed to get status for pod" podUID="85579d4b-0219-4f36-8251-755e28bbe3ba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:04 crc kubenswrapper[4858]: I0202 17:19:04.394280 4858 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:05 crc kubenswrapper[4858]: I0202 17:19:05.402076 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 17:19:05 crc kubenswrapper[4858]: I0202 17:19:05.824867 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 17:19:05 crc kubenswrapper[4858]: I0202 17:19:05.827584 4858 status_manager.go:851] "Failed to get status for pod" podUID="85579d4b-0219-4f36-8251-755e28bbe3ba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:05 crc kubenswrapper[4858]: I0202 17:19:05.910326 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85579d4b-0219-4f36-8251-755e28bbe3ba-kubelet-dir\") pod \"85579d4b-0219-4f36-8251-755e28bbe3ba\" (UID: \"85579d4b-0219-4f36-8251-755e28bbe3ba\") " Feb 02 17:19:05 crc kubenswrapper[4858]: I0202 17:19:05.910387 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85579d4b-0219-4f36-8251-755e28bbe3ba-kube-api-access\") pod \"85579d4b-0219-4f36-8251-755e28bbe3ba\" (UID: \"85579d4b-0219-4f36-8251-755e28bbe3ba\") " Feb 02 17:19:05 crc kubenswrapper[4858]: I0202 17:19:05.910465 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85579d4b-0219-4f36-8251-755e28bbe3ba-var-lock\") pod \"85579d4b-0219-4f36-8251-755e28bbe3ba\" (UID: \"85579d4b-0219-4f36-8251-755e28bbe3ba\") " Feb 02 17:19:05 crc kubenswrapper[4858]: I0202 17:19:05.910556 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85579d4b-0219-4f36-8251-755e28bbe3ba-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "85579d4b-0219-4f36-8251-755e28bbe3ba" (UID: "85579d4b-0219-4f36-8251-755e28bbe3ba"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:19:05 crc kubenswrapper[4858]: I0202 17:19:05.910629 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85579d4b-0219-4f36-8251-755e28bbe3ba-var-lock" (OuterVolumeSpecName: "var-lock") pod "85579d4b-0219-4f36-8251-755e28bbe3ba" (UID: "85579d4b-0219-4f36-8251-755e28bbe3ba"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:19:05 crc kubenswrapper[4858]: I0202 17:19:05.910907 4858 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85579d4b-0219-4f36-8251-755e28bbe3ba-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 17:19:05 crc kubenswrapper[4858]: I0202 17:19:05.910938 4858 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85579d4b-0219-4f36-8251-755e28bbe3ba-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 17:19:05 crc kubenswrapper[4858]: I0202 17:19:05.921258 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85579d4b-0219-4f36-8251-755e28bbe3ba-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "85579d4b-0219-4f36-8251-755e28bbe3ba" (UID: "85579d4b-0219-4f36-8251-755e28bbe3ba"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.011507 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85579d4b-0219-4f36-8251-755e28bbe3ba-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.060817 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.061882 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.062708 4858 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.063261 4858 status_manager.go:851] "Failed to get status for pod" podUID="85579d4b-0219-4f36-8251-755e28bbe3ba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.112541 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.112678 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.112685 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.112767 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.112832 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.112907 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.113199 4858 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.113218 4858 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.113229 4858 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.411329 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.413701 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.414944 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea" exitCode=0 Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.415083 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.415079 4858 scope.go:117] "RemoveContainer" containerID="4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.416257 4858 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.416710 4858 status_manager.go:851] "Failed to get status for pod" podUID="85579d4b-0219-4f36-8251-755e28bbe3ba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.418963 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"85579d4b-0219-4f36-8251-755e28bbe3ba","Type":"ContainerDied","Data":"19e7f7005a7325387f5d905a5fff00431048c06d2f3520536b801ad6ae49424c"} Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.419047 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19e7f7005a7325387f5d905a5fff00431048c06d2f3520536b801ad6ae49424c" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.419149 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.439334 4858 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.439679 4858 status_manager.go:851] "Failed to get status for pod" podUID="85579d4b-0219-4f36-8251-755e28bbe3ba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.444247 4858 scope.go:117] "RemoveContainer" containerID="d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.449528 4858 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.449960 4858 status_manager.go:851] "Failed to get status for pod" podUID="85579d4b-0219-4f36-8251-755e28bbe3ba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.462908 4858 scope.go:117] "RemoveContainer" containerID="91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.480910 4858 scope.go:117] "RemoveContainer" containerID="4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.500589 4858 scope.go:117] "RemoveContainer" containerID="2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.518161 4858 scope.go:117] "RemoveContainer" containerID="f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.538081 4858 scope.go:117] "RemoveContainer" containerID="4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e" Feb 02 17:19:06 crc kubenswrapper[4858]: E0202 17:19:06.538556 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\": container with ID starting with 4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e not found: ID does not exist" containerID="4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.538599 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e"} err="failed to get container status \"4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\": rpc error: code = NotFound desc = could not find container 
\"4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e\": container with ID starting with 4c322b33d9d44f56d83603cab0de2734a605ca3f97fc39c45b3e794d6e1dbd4e not found: ID does not exist" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.538627 4858 scope.go:117] "RemoveContainer" containerID="d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8" Feb 02 17:19:06 crc kubenswrapper[4858]: E0202 17:19:06.538969 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\": container with ID starting with d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8 not found: ID does not exist" containerID="d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.539034 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8"} err="failed to get container status \"d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\": rpc error: code = NotFound desc = could not find container \"d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8\": container with ID starting with d5b002aafd11640a03603cb4e3a23268399ddd9ae9e51e7f5ce9969dfc339ba8 not found: ID does not exist" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.539060 4858 scope.go:117] "RemoveContainer" containerID="91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e" Feb 02 17:19:06 crc kubenswrapper[4858]: E0202 17:19:06.539313 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\": container with ID starting with 91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e not found: ID does not exist" containerID="91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.539338 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e"} err="failed to get container status \"91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\": rpc error: code = NotFound desc = could not find container \"91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e\": container with ID starting with 91ed94d9405936de5ce1574169e5afcc66f8e292829d78e483fa85d6f71d327e not found: ID does not exist" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.539354 4858 scope.go:117] "RemoveContainer" containerID="4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903" Feb 02 17:19:06 crc kubenswrapper[4858]: E0202 17:19:06.539615 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\": container with ID starting with 4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903 not found: ID does not exist" containerID="4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.539668 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903"} 
err="failed to get container status \"4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\": rpc error: code = NotFound desc = could not find container \"4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903\": container with ID starting with 4b3136b9e8a893f4ae805fffd540d758f868e865f343d5a459638ce9dc06e903 not found: ID does not exist" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.539694 4858 scope.go:117] "RemoveContainer" containerID="2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea" Feb 02 17:19:06 crc kubenswrapper[4858]: E0202 17:19:06.539957 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\": container with ID starting with 2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea not found: ID does not exist" containerID="2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.540006 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea"} err="failed to get container status \"2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\": rpc error: code = NotFound desc = could not find container \"2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea\": container with ID starting with 2f76e3b510d2be8257c42947a7fe512c11d82973acdccd244ec2b5c91245e0ea not found: ID does not exist" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.540030 4858 scope.go:117] "RemoveContainer" containerID="f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75" Feb 02 17:19:06 crc kubenswrapper[4858]: E0202 17:19:06.540300 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\": container with ID starting with f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75 not found: ID does not exist" containerID="f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75" Feb 02 17:19:06 crc kubenswrapper[4858]: I0202 17:19:06.540341 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75"} err="failed to get container status \"f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\": rpc error: code = NotFound desc = could not find container \"f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75\": container with ID starting with f2719dfa2fad9be35810511747b6851e64fa922a189929e75f1c94da3c97df75 not found: ID does not exist" Feb 02 17:19:08 crc kubenswrapper[4858]: E0202 17:19:08.768494 4858 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.13:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 17:19:08 crc kubenswrapper[4858]: I0202 17:19:08.769683 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 17:19:08 crc kubenswrapper[4858]: W0202 17:19:08.792754 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-7cc04687642f51e4e809f0a09067e6d45b83709cad8aa84d220142c66d6c7333 WatchSource:0}: Error finding container 7cc04687642f51e4e809f0a09067e6d45b83709cad8aa84d220142c66d6c7333: Status 404 returned error can't find the container with id 7cc04687642f51e4e809f0a09067e6d45b83709cad8aa84d220142c66d6c7333 Feb 02 17:19:08 crc kubenswrapper[4858]: E0202 17:19:08.796621 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.13:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18907d98cdafd3ab openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 17:19:08.795958187 +0000 UTC m=+249.948373462,LastTimestamp:2026-02-02 17:19:08.795958187 +0000 UTC m=+249.948373462,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 17:19:09 crc kubenswrapper[4858]: I0202 17:19:09.438585 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9e5faa8ff18d17e744c69079f73c90e02264905a93efd6f32f18a56aef774107"} Feb 02 17:19:09 crc kubenswrapper[4858]: I0202 17:19:09.438922 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7cc04687642f51e4e809f0a09067e6d45b83709cad8aa84d220142c66d6c7333"} Feb 02 17:19:09 crc kubenswrapper[4858]: E0202 17:19:09.439590 4858 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.13:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 17:19:09 crc kubenswrapper[4858]: I0202 17:19:09.439629 4858 status_manager.go:851] "Failed to get status for pod" podUID="85579d4b-0219-4f36-8251-755e28bbe3ba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:10 crc kubenswrapper[4858]: I0202 17:19:10.402034 4858 status_manager.go:851] "Failed to get status for pod" podUID="85579d4b-0219-4f36-8251-755e28bbe3ba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial 
tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:10 crc kubenswrapper[4858]: E0202 17:19:10.525953 4858 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:10 crc kubenswrapper[4858]: E0202 17:19:10.527027 4858 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:10 crc kubenswrapper[4858]: E0202 17:19:10.527839 4858 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:10 crc kubenswrapper[4858]: E0202 17:19:10.528341 4858 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:10 crc kubenswrapper[4858]: E0202 17:19:10.528862 4858 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:10 crc kubenswrapper[4858]: I0202 17:19:10.528901 4858 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 02 17:19:10 crc kubenswrapper[4858]: E0202 17:19:10.529291 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="200ms" Feb 02 17:19:10 crc kubenswrapper[4858]: E0202 17:19:10.730487 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="400ms" Feb 02 17:19:11 crc kubenswrapper[4858]: E0202 17:19:11.131183 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="800ms" Feb 02 17:19:11 crc kubenswrapper[4858]: E0202 17:19:11.932567 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="1.6s" Feb 02 17:19:13 crc kubenswrapper[4858]: E0202 17:19:13.534301 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="3.2s" Feb 02 17:19:16 crc kubenswrapper[4858]: E0202 17:19:16.735608 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.13:6443: connect: connection refused" interval="6.4s" Feb 02 17:19:17 crc kubenswrapper[4858]: I0202 17:19:17.400409 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:19:17 crc kubenswrapper[4858]: I0202 17:19:17.401860 4858 status_manager.go:851] "Failed to get status for pod" podUID="85579d4b-0219-4f36-8251-755e28bbe3ba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:17 crc kubenswrapper[4858]: I0202 17:19:17.425210 4858 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5cc5a816-d9d0-41c0-877c-250e077ef445" Feb 02 17:19:17 crc kubenswrapper[4858]: I0202 17:19:17.425262 4858 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5cc5a816-d9d0-41c0-877c-250e077ef445" Feb 02 17:19:17 crc kubenswrapper[4858]: E0202 17:19:17.425891 4858 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:19:17 crc kubenswrapper[4858]: I0202 17:19:17.426709 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:19:17 crc kubenswrapper[4858]: E0202 17:19:17.926680 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.13:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18907d98cdafd3ab openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 17:19:08.795958187 +0000 UTC m=+249.948373462,LastTimestamp:2026-02-02 17:19:08.795958187 +0000 UTC m=+249.948373462,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 17:19:18 crc kubenswrapper[4858]: I0202 17:19:18.494343 4858 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 02 17:19:18 crc kubenswrapper[4858]: I0202 17:19:18.494931 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 02 17:19:18 crc kubenswrapper[4858]: I0202 17:19:18.502738 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 17:19:18 crc kubenswrapper[4858]: I0202 17:19:18.503079 4858 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7" exitCode=1 Feb 02 17:19:18 crc kubenswrapper[4858]: I0202 17:19:18.503206 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7"} Feb 02 17:19:18 crc kubenswrapper[4858]: I0202 17:19:18.503894 4858 scope.go:117] "RemoveContainer" containerID="8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7" Feb 02 17:19:18 crc kubenswrapper[4858]: I0202 17:19:18.504387 4858 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:18 crc kubenswrapper[4858]: I0202 17:19:18.505197 4858 status_manager.go:851] "Failed to get status for pod" podUID="85579d4b-0219-4f36-8251-755e28bbe3ba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:18 crc kubenswrapper[4858]: I0202 17:19:18.510358 4858 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="f12650da8f3aa0fe5572e240286118de87fb594c5b42078841f13c4951b23cd0" exitCode=0 Feb 02 17:19:18 crc kubenswrapper[4858]: I0202 17:19:18.510391 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"f12650da8f3aa0fe5572e240286118de87fb594c5b42078841f13c4951b23cd0"} Feb 02 17:19:18 crc kubenswrapper[4858]: I0202 17:19:18.510409 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a0f4d7a7c435f5c838b03e5ee7727c1bcb3160f6ff7394277eb6b9dc27701e3c"} Feb 02 17:19:18 crc kubenswrapper[4858]: I0202 17:19:18.510634 4858 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5cc5a816-d9d0-41c0-877c-250e077ef445" Feb 02 17:19:18 crc kubenswrapper[4858]: I0202 17:19:18.510653 4858 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5cc5a816-d9d0-41c0-877c-250e077ef445" Feb 02 17:19:18 crc kubenswrapper[4858]: E0202 17:19:18.511015 4858 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.102.83.13:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:19:18 crc kubenswrapper[4858]: I0202 17:19:18.511267 4858 status_manager.go:851] "Failed to get status for pod" podUID="85579d4b-0219-4f36-8251-755e28bbe3ba" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:18 crc kubenswrapper[4858]: I0202 17:19:18.512121 4858 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.13:6443: connect: connection refused" Feb 02 17:19:19 crc kubenswrapper[4858]: I0202 17:19:19.527811 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 17:19:19 crc kubenswrapper[4858]: I0202 17:19:19.528393 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9306be8b1142988ab2bc00a55929a220c03f42afa3abec4812949d2eaf6eb638"} Feb 02 17:19:19 crc kubenswrapper[4858]: I0202 17:19:19.531243 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1e19522373b2aa0ec4fb7468c86bda22a1b3a6ed9a240c87186ed7ff587ae467"} Feb 02 17:19:19 crc kubenswrapper[4858]: I0202 17:19:19.531278 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8afc4e04b5b5c2fbfeed65128874dd370092579ca666f30d9354cc9cd5a313d2"} Feb 02 17:19:19 crc kubenswrapper[4858]: I0202 17:19:19.531288 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9c3762ea59cf8ca43173d64a57140adf9849edead408bede707b4f1f4eff50a7"} Feb 02 17:19:20 crc kubenswrapper[4858]: I0202 17:19:20.561711 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"86b8d769803237dd80ef18fa2432eff8ddd8fd0d41ad3dbdfc8f842521850904"} Feb 02 17:19:20 crc kubenswrapper[4858]: I0202 17:19:20.562537 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fda022a733adf711a8d1f76ceb2cc81ca34552b88b83ecada57d56126189c07f"} Feb 02 17:19:20 crc kubenswrapper[4858]: I0202 17:19:20.562631 4858 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5cc5a816-d9d0-41c0-877c-250e077ef445" Feb 02 17:19:20 crc kubenswrapper[4858]: I0202 17:19:20.562652 4858 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5cc5a816-d9d0-41c0-877c-250e077ef445" Feb 02 17:19:20 crc kubenswrapper[4858]: I0202 17:19:20.562789 4858 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:19:22 crc kubenswrapper[4858]: I0202 17:19:22.427836 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:19:22 crc kubenswrapper[4858]: I0202 17:19:22.428269 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:19:22 crc kubenswrapper[4858]: I0202 17:19:22.435452 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:19:25 crc kubenswrapper[4858]: I0202 17:19:25.578527 4858 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:19:25 crc kubenswrapper[4858]: I0202 17:19:25.669592 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="9ab7a209-f161-4bff-8d53-cddf2cfae7b4" Feb 02 17:19:26 crc kubenswrapper[4858]: I0202 17:19:26.602611 4858 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5cc5a816-d9d0-41c0-877c-250e077ef445" Feb 02 17:19:26 crc kubenswrapper[4858]: I0202 17:19:26.602963 4858 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5cc5a816-d9d0-41c0-877c-250e077ef445" Feb 02 17:19:26 crc kubenswrapper[4858]: I0202 17:19:26.608226 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="9ab7a209-f161-4bff-8d53-cddf2cfae7b4" Feb 02 17:19:26 crc kubenswrapper[4858]: I0202 17:19:26.610383 4858 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://9c3762ea59cf8ca43173d64a57140adf9849edead408bede707b4f1f4eff50a7" Feb 02 17:19:26 crc kubenswrapper[4858]: I0202 17:19:26.610424 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 17:19:27 crc kubenswrapper[4858]: I0202 17:19:27.609601 4858 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5cc5a816-d9d0-41c0-877c-250e077ef445" Feb 02 17:19:27 crc kubenswrapper[4858]: I0202 17:19:27.609635 4858 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5cc5a816-d9d0-41c0-877c-250e077ef445" Feb 02 17:19:27 crc kubenswrapper[4858]: I0202 17:19:27.615610 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="9ab7a209-f161-4bff-8d53-cddf2cfae7b4" Feb 02 17:19:27 crc kubenswrapper[4858]: I0202 17:19:27.872716 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 17:19:27 crc kubenswrapper[4858]: I0202 17:19:27.873214 4858 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 
Feb 02 17:19:27 crc kubenswrapper[4858]: I0202 17:19:27.873291 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 02 17:19:28 crc kubenswrapper[4858]: I0202 17:19:28.493794 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 17:19:35 crc kubenswrapper[4858]: I0202 17:19:35.609141 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 02 17:19:35 crc kubenswrapper[4858]: I0202 17:19:35.834451 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 02 17:19:36 crc kubenswrapper[4858]: I0202 17:19:36.044445 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 02 17:19:36 crc kubenswrapper[4858]: I0202 17:19:36.693662 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 02 17:19:36 crc kubenswrapper[4858]: I0202 17:19:36.773163 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 02 17:19:36 crc kubenswrapper[4858]: I0202 17:19:36.809247 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 02 17:19:36 crc kubenswrapper[4858]: I0202 17:19:36.918692 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 02 17:19:37 crc kubenswrapper[4858]: I0202 17:19:37.051844 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 02 17:19:37 crc kubenswrapper[4858]: I0202 17:19:37.290407 4858 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 02 17:19:37 crc kubenswrapper[4858]: I0202 17:19:37.396905 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 02 17:19:37 crc kubenswrapper[4858]: I0202 17:19:37.560790 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 02 17:19:37 crc kubenswrapper[4858]: I0202 17:19:37.562437 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 02 17:19:37 crc kubenswrapper[4858]: I0202 17:19:37.810220 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 02 17:19:37 crc kubenswrapper[4858]: I0202 17:19:37.820648 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 02 17:19:37 crc kubenswrapper[4858]: I0202 17:19:37.853804 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 02 17:19:37 crc kubenswrapper[4858]: I0202 17:19:37.872912 4858 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 02 17:19:37 crc kubenswrapper[4858]: I0202 17:19:37.873019 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 02 17:19:38 crc kubenswrapper[4858]: I0202 17:19:38.068865 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 02 17:19:38 crc kubenswrapper[4858]: I0202 17:19:38.222801 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 02 17:19:38 crc kubenswrapper[4858]: I0202 17:19:38.316643 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 02 17:19:38 crc kubenswrapper[4858]: I0202 17:19:38.355225 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 02 17:19:38 crc kubenswrapper[4858]: I0202 17:19:38.370961 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 02 17:19:38 crc kubenswrapper[4858]: I0202 17:19:38.402652 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 02 17:19:38 crc kubenswrapper[4858]: I0202 17:19:38.488592 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 02 17:19:38 crc kubenswrapper[4858]: I0202 17:19:38.496378 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 02 17:19:38 crc kubenswrapper[4858]: I0202 17:19:38.508552 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 02 17:19:38 crc kubenswrapper[4858]: I0202 17:19:38.708053 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 02 17:19:38 crc kubenswrapper[4858]: I0202 17:19:38.709683 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 02 17:19:38 crc kubenswrapper[4858]: I0202 17:19:38.739477 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 02 17:19:38 crc kubenswrapper[4858]: I0202 17:19:38.902121 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 02 17:19:39 crc kubenswrapper[4858]: I0202 17:19:39.032672 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 02 17:19:39 crc kubenswrapper[4858]: I0202 17:19:39.215218 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 02 17:19:39 crc kubenswrapper[4858]: I0202 17:19:39.319084 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 02 17:19:39 crc kubenswrapper[4858]: I0202 17:19:39.453591 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 02 17:19:39 crc kubenswrapper[4858]: I0202 17:19:39.472219 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 02 17:19:39 crc kubenswrapper[4858]: I0202 17:19:39.478550 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 02 17:19:39 crc kubenswrapper[4858]: I0202 17:19:39.595242 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 02 17:19:39 crc kubenswrapper[4858]: I0202 17:19:39.629463 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 02 17:19:39 crc kubenswrapper[4858]: I0202 17:19:39.745281 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 02 17:19:39 crc kubenswrapper[4858]: I0202 17:19:39.771144 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 02 17:19:39 crc kubenswrapper[4858]: I0202 17:19:39.772137 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 02 17:19:39 crc kubenswrapper[4858]: I0202 17:19:39.784322 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 02 17:19:39 crc kubenswrapper[4858]: I0202 17:19:39.876171 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 02 17:19:39 crc kubenswrapper[4858]: I0202 17:19:39.879154 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 02 17:19:39 crc kubenswrapper[4858]: I0202 17:19:39.882025 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 02 17:19:40 crc kubenswrapper[4858]: I0202 17:19:40.071203 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 02 17:19:40 crc kubenswrapper[4858]: I0202 17:19:40.291614 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 02 17:19:40 crc kubenswrapper[4858]: I0202 17:19:40.389043 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 02 17:19:40 crc kubenswrapper[4858]: I0202 17:19:40.420511 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 02 17:19:40 crc kubenswrapper[4858]: I0202 17:19:40.428112 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 02 17:19:40 crc kubenswrapper[4858]: I0202 17:19:40.590770 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 02 17:19:40 crc kubenswrapper[4858]: I0202 17:19:40.635757 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 02 17:19:40 crc kubenswrapper[4858]: I0202 17:19:40.816902 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 02 17:19:40 crc kubenswrapper[4858]: I0202 17:19:40.831448 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 02 17:19:40 crc kubenswrapper[4858]: I0202 17:19:40.874309 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 02 17:19:40 crc kubenswrapper[4858]: I0202 17:19:40.903006 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 02 17:19:41 crc kubenswrapper[4858]: I0202 17:19:41.024450 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 02 17:19:41 crc kubenswrapper[4858]: I0202 17:19:41.031201 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 02 17:19:41 crc kubenswrapper[4858]: I0202 17:19:41.197156 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 02 17:19:41 crc kubenswrapper[4858]: I0202 17:19:41.279496 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 02 17:19:41 crc kubenswrapper[4858]: I0202 17:19:41.291204 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 02 17:19:41 crc kubenswrapper[4858]: I0202 17:19:41.307130 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 02 17:19:41 crc kubenswrapper[4858]: I0202 17:19:41.406465 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 02 17:19:41 crc kubenswrapper[4858]: I0202 17:19:41.408262 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 02 17:19:41 crc kubenswrapper[4858]: I0202 17:19:41.414810 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 02 17:19:41 crc kubenswrapper[4858]: I0202 17:19:41.420826 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 02 17:19:41 crc kubenswrapper[4858]: I0202 17:19:41.452692 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 02 17:19:41 crc kubenswrapper[4858]: I0202 17:19:41.501306 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 02 17:19:41 crc kubenswrapper[4858]: I0202 17:19:41.585197 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 02 17:19:41 crc kubenswrapper[4858]: I0202 17:19:41.674879 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 17:19:41 crc kubenswrapper[4858]: I0202 17:19:41.799865 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 02 17:19:41 crc kubenswrapper[4858]: I0202 17:19:41.965424 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.009053 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.010640 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.020578 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.051917 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.153183 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.204180 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.277173 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.281126 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.281412 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.290863 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.429818 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.465638 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.466114 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.534529 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.596004 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.708229 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.745874 4858 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.777944 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.850038 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.862584 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.891460 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.925527 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.937384 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 02 17:19:42 crc kubenswrapper[4858]: I0202 17:19:42.954803 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 17:19:43 crc kubenswrapper[4858]: I0202 17:19:43.015184 4858 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 02 17:19:43 crc kubenswrapper[4858]: I0202 17:19:43.058147 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 02 17:19:43 crc kubenswrapper[4858]: I0202 17:19:43.162777 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 17:19:43 crc kubenswrapper[4858]: I0202 17:19:43.213215 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 17:19:43 crc kubenswrapper[4858]: I0202 17:19:43.213285 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 02 17:19:43 crc kubenswrapper[4858]: I0202 17:19:43.352351 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 17:19:43 crc kubenswrapper[4858]: I0202 17:19:43.368059 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 02 17:19:43 crc kubenswrapper[4858]: I0202 17:19:43.376969 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 02 17:19:43 crc kubenswrapper[4858]: I0202 17:19:43.530210 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 17:19:43 crc kubenswrapper[4858]: I0202 17:19:43.622302 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 02 17:19:43 crc kubenswrapper[4858]: I0202 17:19:43.664075 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 02 17:19:43 crc kubenswrapper[4858]: I0202 17:19:43.671391 4858 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 02 17:19:43 crc kubenswrapper[4858]: I0202 17:19:43.771167 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 17:19:43 crc kubenswrapper[4858]: I0202 17:19:43.781065 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 17:19:43 crc kubenswrapper[4858]: I0202 17:19:43.836244 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 02 17:19:43 crc kubenswrapper[4858]: I0202 17:19:43.870524 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 02 17:19:43 crc kubenswrapper[4858]: I0202 17:19:43.942793 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 17:19:43 crc kubenswrapper[4858]: I0202 17:19:43.957365 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 02 17:19:43 crc kubenswrapper[4858]: I0202 17:19:43.963154 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.022511 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.026435 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.107526 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.203651 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.238408 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.255608 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.278511 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.284251 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.316266 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.362993 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.384062 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.490786 4858 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.586837 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.644951 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.687310 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.765051 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.775940 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.794615 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.841117 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.857580 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.877220 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.881548 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 02 17:19:44 crc kubenswrapper[4858]: I0202 17:19:44.915878 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 17:19:45 crc kubenswrapper[4858]: I0202 17:19:45.027223 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 02 17:19:45 crc kubenswrapper[4858]: I0202 17:19:45.194877 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 17:19:45 crc kubenswrapper[4858]: I0202 17:19:45.229517 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 17:19:45 crc kubenswrapper[4858]: I0202 17:19:45.241720 4858 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 02 17:19:45 crc kubenswrapper[4858]: I0202 17:19:45.241750 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 02 17:19:45 crc kubenswrapper[4858]: I0202 17:19:45.280517 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 17:19:45 crc kubenswrapper[4858]: I0202 17:19:45.379608 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 17:19:45 crc kubenswrapper[4858]: I0202 
Feb 02 17:19:45 crc kubenswrapper[4858]: I0202 17:19:45.446611 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 02 17:19:45 crc kubenswrapper[4858]: I0202 17:19:45.479376 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 02 17:19:45 crc kubenswrapper[4858]: I0202 17:19:45.512083 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 02 17:19:45 crc kubenswrapper[4858]: I0202 17:19:45.586560 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 02 17:19:45 crc kubenswrapper[4858]: I0202 17:19:45.600547 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 02 17:19:45 crc kubenswrapper[4858]: I0202 17:19:45.606533 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 02 17:19:45 crc kubenswrapper[4858]: I0202 17:19:45.748189 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 02 17:19:45 crc kubenswrapper[4858]: I0202 17:19:45.777302 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 02 17:19:45 crc kubenswrapper[4858]: I0202 17:19:45.908082 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 02 17:19:45 crc kubenswrapper[4858]: I0202 17:19:45.984460 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 02 17:19:46 crc kubenswrapper[4858]: I0202 17:19:46.024877 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 02 17:19:46 crc kubenswrapper[4858]: I0202 17:19:46.059827 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 02 17:19:46 crc kubenswrapper[4858]: I0202 17:19:46.165888 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 02 17:19:46 crc kubenswrapper[4858]: I0202 17:19:46.248739 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 02 17:19:46 crc kubenswrapper[4858]: I0202 17:19:46.248745 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 02 17:19:46 crc kubenswrapper[4858]: I0202 17:19:46.287147 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 02 17:19:46 crc kubenswrapper[4858]: I0202 17:19:46.401155 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 02 17:19:46 crc kubenswrapper[4858]: I0202 17:19:46.406251 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 02 17:19:46 crc kubenswrapper[4858]: I0202 17:19:46.548579 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 02 17:19:46 crc kubenswrapper[4858]: I0202 17:19:46.602872 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 02 17:19:46 crc kubenswrapper[4858]: I0202 17:19:46.603697 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 02 17:19:46 crc kubenswrapper[4858]: I0202 17:19:46.641754 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 02 17:19:46 crc kubenswrapper[4858]: I0202 17:19:46.679041 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 02 17:19:46 crc kubenswrapper[4858]: I0202 17:19:46.704122 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 02 17:19:46 crc kubenswrapper[4858]: I0202 17:19:46.744082 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 02 17:19:46 crc kubenswrapper[4858]: I0202 17:19:46.873570 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 02 17:19:46 crc kubenswrapper[4858]: I0202 17:19:46.902644 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.014833 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.032653 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.039875 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.065303 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.081940 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.155932 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.167555 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.217798 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.234583 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.275906 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.371025 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.380355 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.392357 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.461688 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.533327 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.535641 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.546183 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.579033 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.603322 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.613296 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.613545 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.645659 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.675538 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.756892 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.790803 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.872547 4858 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.872635 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.872707 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.873688 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"9306be8b1142988ab2bc00a55929a220c03f42afa3abec4812949d2eaf6eb638"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.874082 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://9306be8b1142988ab2bc00a55929a220c03f42afa3abec4812949d2eaf6eb638" gracePeriod=30
Feb 02 17:19:47 crc kubenswrapper[4858]: I0202 17:19:47.909774 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 02 17:19:48 crc kubenswrapper[4858]: I0202 17:19:48.012119 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 02 17:19:48 crc kubenswrapper[4858]: I0202 17:19:48.041120 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 02 17:19:48 crc kubenswrapper[4858]: I0202 17:19:48.066257 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 02 17:19:48 crc kubenswrapper[4858]: I0202 17:19:48.208059 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 02 17:19:48 crc kubenswrapper[4858]: I0202 17:19:48.222022 4858 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 02 17:19:48 crc kubenswrapper[4858]: I0202 17:19:48.339396 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 02 17:19:48 crc kubenswrapper[4858]: I0202 17:19:48.341285 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 02 17:19:48 crc kubenswrapper[4858]: I0202 17:19:48.358254 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 02 17:19:48 crc kubenswrapper[4858]: I0202 17:19:48.475876 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 02 17:19:48 crc kubenswrapper[4858]: I0202 17:19:48.502753 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 02 17:19:48 crc kubenswrapper[4858]: I0202 17:19:48.586546 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 02 17:19:48 crc kubenswrapper[4858]: I0202 17:19:48.636352 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 02 17:19:48 crc kubenswrapper[4858]: I0202 17:19:48.647672 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 02 17:19:48 crc kubenswrapper[4858]: I0202 17:19:48.653538 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 02 17:19:48 crc kubenswrapper[4858]: I0202 17:19:48.679693 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 02 17:19:48 crc kubenswrapper[4858]: I0202 17:19:48.710861 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 02 17:19:48 crc kubenswrapper[4858]: I0202 17:19:48.729885 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 02 17:19:48 crc kubenswrapper[4858]: I0202 17:19:48.742327 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 02 17:19:48 crc kubenswrapper[4858]: I0202 17:19:48.826070 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 02 17:19:48 crc kubenswrapper[4858]: I0202 17:19:48.987524 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.021049 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.090163 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.098059 4858 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.104341 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.104402 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.111316 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.130238 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=24.130217039 podStartE2EDuration="24.130217039s" podCreationTimestamp="2026-02-02 17:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:19:49.127227518 +0000 UTC m=+290.279642793" watchObservedRunningTime="2026-02-02 17:19:49.130217039 +0000 UTC m=+290.282632324"
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.191449 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.289705 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.340757 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.348240 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.357307 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.401214 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.517581 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.588002 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7x482"]
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.588501 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7x482" podUID="69eb2d24-ee9f-4ef2-8bf0-233099196e0d" containerName="registry-server" containerID="cri-o://3ee37aa884ece2c00bc0f392fbdbd0178303c1b150463395ed72eb5277feaed5" gracePeriod=30
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.596254 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qtpsf"]
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.597031 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qtpsf" podUID="9b4f9546-2d15-4925-aba0-40e3b10098a0" containerName="registry-server" containerID="cri-o://2d0339f0eb4c03d67e98e4f24394913ebad0d57c561976de80c1ae12dde39132" gracePeriod=30
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.604227 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nd9bb"]
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.604627 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb" podUID="89d9c9f7-5f7c-4cc0-add0-bd38785c308e" containerName="marketplace-operator" containerID="cri-o://32ef38fdee33da228060ef7ba5027031471564e5ad48791d60f8595e4e66a3e1" gracePeriod=30
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.608138 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmzkp"]
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.608443 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vmzkp" podUID="a32894ac-052e-4a93-a3d1-79aeec5b8869" containerName="registry-server" containerID="cri-o://ede21328d6819dde3686a8a8ef6fe7d48966ad97888d39ea8220955d6f5de202" gracePeriod=30
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.627779 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c9zvf"]
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.628152 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c9zvf" podUID="f1040c7c-84e3-41c7-9484-13022fbcef4b" containerName="registry-server" containerID="cri-o://96bec0ad2a1fab1d3ec2d56324748f74773965ae489a2735e53b3befd910c831" gracePeriod=30
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.722927 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.774676 4858 generic.go:334] "Generic (PLEG): container finished" podID="69eb2d24-ee9f-4ef2-8bf0-233099196e0d" containerID="3ee37aa884ece2c00bc0f392fbdbd0178303c1b150463395ed72eb5277feaed5" exitCode=0
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.774788 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7x482" event={"ID":"69eb2d24-ee9f-4ef2-8bf0-233099196e0d","Type":"ContainerDied","Data":"3ee37aa884ece2c00bc0f392fbdbd0178303c1b150463395ed72eb5277feaed5"}
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.782260 4858 generic.go:334] "Generic (PLEG): container finished" podID="a32894ac-052e-4a93-a3d1-79aeec5b8869" containerID="ede21328d6819dde3686a8a8ef6fe7d48966ad97888d39ea8220955d6f5de202" exitCode=0
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.782377 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmzkp" event={"ID":"a32894ac-052e-4a93-a3d1-79aeec5b8869","Type":"ContainerDied","Data":"ede21328d6819dde3686a8a8ef6fe7d48966ad97888d39ea8220955d6f5de202"}
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.784245 4858 generic.go:334] "Generic (PLEG): container finished" podID="89d9c9f7-5f7c-4cc0-add0-bd38785c308e" containerID="32ef38fdee33da228060ef7ba5027031471564e5ad48791d60f8595e4e66a3e1" exitCode=0
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.784295 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb" event={"ID":"89d9c9f7-5f7c-4cc0-add0-bd38785c308e","Type":"ContainerDied","Data":"32ef38fdee33da228060ef7ba5027031471564e5ad48791d60f8595e4e66a3e1"}
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.789373 4858 generic.go:334] "Generic (PLEG): container finished" podID="f1040c7c-84e3-41c7-9484-13022fbcef4b" containerID="96bec0ad2a1fab1d3ec2d56324748f74773965ae489a2735e53b3befd910c831" exitCode=0
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.789472 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9zvf" event={"ID":"f1040c7c-84e3-41c7-9484-13022fbcef4b","Type":"ContainerDied","Data":"96bec0ad2a1fab1d3ec2d56324748f74773965ae489a2735e53b3befd910c831"}
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.791081 4858 generic.go:334] "Generic (PLEG): container finished" podID="9b4f9546-2d15-4925-aba0-40e3b10098a0" containerID="2d0339f0eb4c03d67e98e4f24394913ebad0d57c561976de80c1ae12dde39132" exitCode=0
Feb 02 17:19:49 crc kubenswrapper[4858]: I0202 17:19:49.791167 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtpsf" event={"ID":"9b4f9546-2d15-4925-aba0-40e3b10098a0","Type":"ContainerDied","Data":"2d0339f0eb4c03d67e98e4f24394913ebad0d57c561976de80c1ae12dde39132"}
Feb 02 17:19:49 crc kubenswrapper[4858]: E0202 17:19:49.904737 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ede21328d6819dde3686a8a8ef6fe7d48966ad97888d39ea8220955d6f5de202 is running failed: container process not found" containerID="ede21328d6819dde3686a8a8ef6fe7d48966ad97888d39ea8220955d6f5de202" cmd=["grpc_health_probe","-addr=:50051"]
Feb 02 17:19:49 crc kubenswrapper[4858]: E0202 17:19:49.905861 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ede21328d6819dde3686a8a8ef6fe7d48966ad97888d39ea8220955d6f5de202 is running failed: container process not found" containerID="ede21328d6819dde3686a8a8ef6fe7d48966ad97888d39ea8220955d6f5de202" cmd=["grpc_health_probe","-addr=:50051"]
Feb 02 17:19:49 crc kubenswrapper[4858]: E0202 17:19:49.906586 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ede21328d6819dde3686a8a8ef6fe7d48966ad97888d39ea8220955d6f5de202 is running failed: container process not found" containerID="ede21328d6819dde3686a8a8ef6fe7d48966ad97888d39ea8220955d6f5de202" cmd=["grpc_health_probe","-addr=:50051"]
Feb 02 17:19:49 crc kubenswrapper[4858]: E0202 17:19:49.906697 4858 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ede21328d6819dde3686a8a8ef6fe7d48966ad97888d39ea8220955d6f5de202 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-vmzkp" podUID="a32894ac-052e-4a93-a3d1-79aeec5b8869" containerName="registry-server"
Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.080817 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7x482"
Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.112238 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.118501 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.180723 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69eb2d24-ee9f-4ef2-8bf0-233099196e0d-catalog-content\") pod \"69eb2d24-ee9f-4ef2-8bf0-233099196e0d\" (UID: \"69eb2d24-ee9f-4ef2-8bf0-233099196e0d\") "
Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.180789 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dmmc\" (UniqueName: \"kubernetes.io/projected/69eb2d24-ee9f-4ef2-8bf0-233099196e0d-kube-api-access-7dmmc\") pod \"69eb2d24-ee9f-4ef2-8bf0-233099196e0d\" (UID: \"69eb2d24-ee9f-4ef2-8bf0-233099196e0d\") "
Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.180819 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69eb2d24-ee9f-4ef2-8bf0-233099196e0d-utilities\") pod \"69eb2d24-ee9f-4ef2-8bf0-233099196e0d\" (UID: \"69eb2d24-ee9f-4ef2-8bf0-233099196e0d\") "
Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.181652 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69eb2d24-ee9f-4ef2-8bf0-233099196e0d-utilities" (OuterVolumeSpecName: "utilities") pod "69eb2d24-ee9f-4ef2-8bf0-233099196e0d" (UID: "69eb2d24-ee9f-4ef2-8bf0-233099196e0d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.186221 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69eb2d24-ee9f-4ef2-8bf0-233099196e0d-kube-api-access-7dmmc" (OuterVolumeSpecName: "kube-api-access-7dmmc") pod "69eb2d24-ee9f-4ef2-8bf0-233099196e0d" (UID: "69eb2d24-ee9f-4ef2-8bf0-233099196e0d"). InnerVolumeSpecName "kube-api-access-7dmmc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.187312 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb"
Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.192364 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmzkp"
Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.196316 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c9zvf"
Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.223490 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69eb2d24-ee9f-4ef2-8bf0-233099196e0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69eb2d24-ee9f-4ef2-8bf0-233099196e0d" (UID: "69eb2d24-ee9f-4ef2-8bf0-233099196e0d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.228132 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.259772 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qtpsf" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.282327 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69eb2d24-ee9f-4ef2-8bf0-233099196e0d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.282370 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dmmc\" (UniqueName: \"kubernetes.io/projected/69eb2d24-ee9f-4ef2-8bf0-233099196e0d-kube-api-access-7dmmc\") on node \"crc\" DevicePath \"\"" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.282413 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69eb2d24-ee9f-4ef2-8bf0-233099196e0d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.383390 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a32894ac-052e-4a93-a3d1-79aeec5b8869-catalog-content\") pod \"a32894ac-052e-4a93-a3d1-79aeec5b8869\" (UID: \"a32894ac-052e-4a93-a3d1-79aeec5b8869\") " Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.383448 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/89d9c9f7-5f7c-4cc0-add0-bd38785c308e-marketplace-operator-metrics\") pod \"89d9c9f7-5f7c-4cc0-add0-bd38785c308e\" (UID: \"89d9c9f7-5f7c-4cc0-add0-bd38785c308e\") " Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.383479 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9knkc\" (UniqueName: \"kubernetes.io/projected/a32894ac-052e-4a93-a3d1-79aeec5b8869-kube-api-access-9knkc\") pod \"a32894ac-052e-4a93-a3d1-79aeec5b8869\" (UID: \"a32894ac-052e-4a93-a3d1-79aeec5b8869\") " Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.383495 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b4f9546-2d15-4925-aba0-40e3b10098a0-utilities\") pod \"9b4f9546-2d15-4925-aba0-40e3b10098a0\" (UID: \"9b4f9546-2d15-4925-aba0-40e3b10098a0\") " Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.383517 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j989\" (UniqueName: \"kubernetes.io/projected/f1040c7c-84e3-41c7-9484-13022fbcef4b-kube-api-access-2j989\") pod \"f1040c7c-84e3-41c7-9484-13022fbcef4b\" (UID: \"f1040c7c-84e3-41c7-9484-13022fbcef4b\") " Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.383536 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s69g4\" (UniqueName: \"kubernetes.io/projected/89d9c9f7-5f7c-4cc0-add0-bd38785c308e-kube-api-access-s69g4\") pod \"89d9c9f7-5f7c-4cc0-add0-bd38785c308e\" (UID: \"89d9c9f7-5f7c-4cc0-add0-bd38785c308e\") " Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.383560 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1040c7c-84e3-41c7-9484-13022fbcef4b-utilities\") pod \"f1040c7c-84e3-41c7-9484-13022fbcef4b\" (UID: \"f1040c7c-84e3-41c7-9484-13022fbcef4b\") " Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.383613 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-wvvxl\" (UniqueName: \"kubernetes.io/projected/9b4f9546-2d15-4925-aba0-40e3b10098a0-kube-api-access-wvvxl\") pod \"9b4f9546-2d15-4925-aba0-40e3b10098a0\" (UID: \"9b4f9546-2d15-4925-aba0-40e3b10098a0\") " Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.383650 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89d9c9f7-5f7c-4cc0-add0-bd38785c308e-marketplace-trusted-ca\") pod \"89d9c9f7-5f7c-4cc0-add0-bd38785c308e\" (UID: \"89d9c9f7-5f7c-4cc0-add0-bd38785c308e\") " Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.383677 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b4f9546-2d15-4925-aba0-40e3b10098a0-catalog-content\") pod \"9b4f9546-2d15-4925-aba0-40e3b10098a0\" (UID: \"9b4f9546-2d15-4925-aba0-40e3b10098a0\") " Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.384204 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a32894ac-052e-4a93-a3d1-79aeec5b8869-utilities\") pod \"a32894ac-052e-4a93-a3d1-79aeec5b8869\" (UID: \"a32894ac-052e-4a93-a3d1-79aeec5b8869\") " Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.384231 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1040c7c-84e3-41c7-9484-13022fbcef4b-catalog-content\") pod \"f1040c7c-84e3-41c7-9484-13022fbcef4b\" (UID: \"f1040c7c-84e3-41c7-9484-13022fbcef4b\") " Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.384317 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b4f9546-2d15-4925-aba0-40e3b10098a0-utilities" (OuterVolumeSpecName: "utilities") pod "9b4f9546-2d15-4925-aba0-40e3b10098a0" (UID: "9b4f9546-2d15-4925-aba0-40e3b10098a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.384944 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1040c7c-84e3-41c7-9484-13022fbcef4b-utilities" (OuterVolumeSpecName: "utilities") pod "f1040c7c-84e3-41c7-9484-13022fbcef4b" (UID: "f1040c7c-84e3-41c7-9484-13022fbcef4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.385129 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a32894ac-052e-4a93-a3d1-79aeec5b8869-utilities" (OuterVolumeSpecName: "utilities") pod "a32894ac-052e-4a93-a3d1-79aeec5b8869" (UID: "a32894ac-052e-4a93-a3d1-79aeec5b8869"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.385139 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89d9c9f7-5f7c-4cc0-add0-bd38785c308e-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "89d9c9f7-5f7c-4cc0-add0-bd38785c308e" (UID: "89d9c9f7-5f7c-4cc0-add0-bd38785c308e"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.385319 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b4f9546-2d15-4925-aba0-40e3b10098a0-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.385344 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1040c7c-84e3-41c7-9484-13022fbcef4b-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.385357 4858 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89d9c9f7-5f7c-4cc0-add0-bd38785c308e-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.385370 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a32894ac-052e-4a93-a3d1-79aeec5b8869-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.386279 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1040c7c-84e3-41c7-9484-13022fbcef4b-kube-api-access-2j989" (OuterVolumeSpecName: "kube-api-access-2j989") pod "f1040c7c-84e3-41c7-9484-13022fbcef4b" (UID: "f1040c7c-84e3-41c7-9484-13022fbcef4b"). InnerVolumeSpecName "kube-api-access-2j989". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.386570 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89d9c9f7-5f7c-4cc0-add0-bd38785c308e-kube-api-access-s69g4" (OuterVolumeSpecName: "kube-api-access-s69g4") pod "89d9c9f7-5f7c-4cc0-add0-bd38785c308e" (UID: "89d9c9f7-5f7c-4cc0-add0-bd38785c308e"). InnerVolumeSpecName "kube-api-access-s69g4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.387664 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b4f9546-2d15-4925-aba0-40e3b10098a0-kube-api-access-wvvxl" (OuterVolumeSpecName: "kube-api-access-wvvxl") pod "9b4f9546-2d15-4925-aba0-40e3b10098a0" (UID: "9b4f9546-2d15-4925-aba0-40e3b10098a0"). InnerVolumeSpecName "kube-api-access-wvvxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.388522 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a32894ac-052e-4a93-a3d1-79aeec5b8869-kube-api-access-9knkc" (OuterVolumeSpecName: "kube-api-access-9knkc") pod "a32894ac-052e-4a93-a3d1-79aeec5b8869" (UID: "a32894ac-052e-4a93-a3d1-79aeec5b8869"). InnerVolumeSpecName "kube-api-access-9knkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.389304 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89d9c9f7-5f7c-4cc0-add0-bd38785c308e-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "89d9c9f7-5f7c-4cc0-add0-bd38785c308e" (UID: "89d9c9f7-5f7c-4cc0-add0-bd38785c308e"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.420604 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a32894ac-052e-4a93-a3d1-79aeec5b8869-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a32894ac-052e-4a93-a3d1-79aeec5b8869" (UID: "a32894ac-052e-4a93-a3d1-79aeec5b8869"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.442763 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b4f9546-2d15-4925-aba0-40e3b10098a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b4f9546-2d15-4925-aba0-40e3b10098a0" (UID: "9b4f9546-2d15-4925-aba0-40e3b10098a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.450760 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.486711 4858 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/89d9c9f7-5f7c-4cc0-add0-bd38785c308e-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.486750 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9knkc\" (UniqueName: \"kubernetes.io/projected/a32894ac-052e-4a93-a3d1-79aeec5b8869-kube-api-access-9knkc\") on node \"crc\" DevicePath \"\"" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.486768 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s69g4\" (UniqueName: \"kubernetes.io/projected/89d9c9f7-5f7c-4cc0-add0-bd38785c308e-kube-api-access-s69g4\") on node \"crc\" DevicePath \"\"" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.486785 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j989\" (UniqueName: \"kubernetes.io/projected/f1040c7c-84e3-41c7-9484-13022fbcef4b-kube-api-access-2j989\") on node \"crc\" DevicePath \"\"" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.486803 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvvxl\" (UniqueName: \"kubernetes.io/projected/9b4f9546-2d15-4925-aba0-40e3b10098a0-kube-api-access-wvvxl\") on node \"crc\" DevicePath \"\"" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.486820 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b4f9546-2d15-4925-aba0-40e3b10098a0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.486832 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a32894ac-052e-4a93-a3d1-79aeec5b8869-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.514366 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.530334 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1040c7c-84e3-41c7-9484-13022fbcef4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"f1040c7c-84e3-41c7-9484-13022fbcef4b" (UID: "f1040c7c-84e3-41c7-9484-13022fbcef4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.546899 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.587538 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1040c7c-84e3-41c7-9484-13022fbcef4b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.614939 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.649632 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.667764 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.696250 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.761490 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.799298 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmzkp" event={"ID":"a32894ac-052e-4a93-a3d1-79aeec5b8869","Type":"ContainerDied","Data":"91169b3707147c80579305e556fb88fc2acf58b2a6355cc9184a56784856841d"} Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.799362 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmzkp" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.800213 4858 scope.go:117] "RemoveContainer" containerID="ede21328d6819dde3686a8a8ef6fe7d48966ad97888d39ea8220955d6f5de202" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.803534 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7x482" event={"ID":"69eb2d24-ee9f-4ef2-8bf0-233099196e0d","Type":"ContainerDied","Data":"49ca3ab742df3cd8426ab89c6ab6f5c9a0c439ae6c64e6d3b1901bdd33bea0e7"} Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.803658 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7x482" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.808512 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.808542 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nd9bb" event={"ID":"89d9c9f7-5f7c-4cc0-add0-bd38785c308e","Type":"ContainerDied","Data":"9af29f0fdae21c6decc648e52100a2f72b468c131ab8987859359f1a30032970"} Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.816043 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9zvf" event={"ID":"f1040c7c-84e3-41c7-9484-13022fbcef4b","Type":"ContainerDied","Data":"b0911d89658e2a108503c219e4e892523fd3f339737112acd80093bb9cd61c12"} Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.816183 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c9zvf" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.821598 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtpsf" event={"ID":"9b4f9546-2d15-4925-aba0-40e3b10098a0","Type":"ContainerDied","Data":"f1e68fa4c1bfa59c96af8c6de43552c3749077f455e8135c6ac1c224fe4a91c5"} Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.821683 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qtpsf" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.836917 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7x482"] Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.844012 4858 scope.go:117] "RemoveContainer" containerID="c6a4b0376b8073701c512b55a646248ed5c7e442a4aae731a32fa5fc635c50ab" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.844091 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7x482"] Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.855715 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nd9bb"] Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.862072 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nd9bb"] Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.869496 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmzkp"] Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.874810 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmzkp"] Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.883329 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qtpsf"] Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.892864 4858 scope.go:117] "RemoveContainer" containerID="49731b21b282f39e584bb545fe27b0c4a395a5f79914425e3e4df4065920c229" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.893770 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qtpsf"] Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.896664 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c9zvf"] Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.899217 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-operators-c9zvf"] Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.912370 4858 scope.go:117] "RemoveContainer" containerID="3ee37aa884ece2c00bc0f392fbdbd0178303c1b150463395ed72eb5277feaed5" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.915837 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.927726 4858 scope.go:117] "RemoveContainer" containerID="138846b9e2a32cf9e6ef2c0f35a9d4b7da1ebc8c33fcc4be0d7966c80a6f88fc" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.947289 4858 scope.go:117] "RemoveContainer" containerID="be20214a6c5d05ca3d39d99b54c35104c3b89fd6cd0cb11c5130f46607285074" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.963186 4858 scope.go:117] "RemoveContainer" containerID="32ef38fdee33da228060ef7ba5027031471564e5ad48791d60f8595e4e66a3e1" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.978610 4858 scope.go:117] "RemoveContainer" containerID="96bec0ad2a1fab1d3ec2d56324748f74773965ae489a2735e53b3befd910c831" Feb 02 17:19:50 crc kubenswrapper[4858]: I0202 17:19:50.991108 4858 scope.go:117] "RemoveContainer" containerID="eba582836dbbd37917c2d38d7a4e786aff64e2a62fc54fdc2347777c88955bc7" Feb 02 17:19:51 crc kubenswrapper[4858]: I0202 17:19:51.005785 4858 scope.go:117] "RemoveContainer" containerID="75b856138fabfbf84faab7daa1e48276bd7ee52850621b24c8bb579c549a6fb4" Feb 02 17:19:51 crc kubenswrapper[4858]: I0202 17:19:51.017553 4858 scope.go:117] "RemoveContainer" containerID="2d0339f0eb4c03d67e98e4f24394913ebad0d57c561976de80c1ae12dde39132" Feb 02 17:19:51 crc kubenswrapper[4858]: I0202 17:19:51.031728 4858 scope.go:117] "RemoveContainer" containerID="12079498b645c1087925f54f14c6d5688fd2a9c359e4bbc859348ebc245b5df7" Feb 02 17:19:51 crc kubenswrapper[4858]: I0202 17:19:51.044929 4858 scope.go:117] "RemoveContainer" containerID="8dd825bdd6147d4e0f0ace09def7945abbca9ad5bef70bcb2e0492534732fe8d" Feb 02 17:19:51 crc kubenswrapper[4858]: I0202 17:19:51.046714 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 17:19:51 crc kubenswrapper[4858]: I0202 17:19:51.173397 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 17:19:51 crc kubenswrapper[4858]: I0202 17:19:51.273668 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 02 17:19:51 crc kubenswrapper[4858]: I0202 17:19:51.293814 4858 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 02 17:19:51 crc kubenswrapper[4858]: I0202 17:19:51.430777 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 02 17:19:51 crc kubenswrapper[4858]: I0202 17:19:51.446672 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 02 17:19:51 crc kubenswrapper[4858]: I0202 17:19:51.486280 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 17:19:51 crc kubenswrapper[4858]: I0202 17:19:51.682915 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 17:19:51 crc kubenswrapper[4858]: I0202 
17:19:51.687609 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 02 17:19:51 crc kubenswrapper[4858]: I0202 17:19:51.867590 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 17:19:52 crc kubenswrapper[4858]: I0202 17:19:52.412434 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69eb2d24-ee9f-4ef2-8bf0-233099196e0d" path="/var/lib/kubelet/pods/69eb2d24-ee9f-4ef2-8bf0-233099196e0d/volumes" Feb 02 17:19:52 crc kubenswrapper[4858]: I0202 17:19:52.414169 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89d9c9f7-5f7c-4cc0-add0-bd38785c308e" path="/var/lib/kubelet/pods/89d9c9f7-5f7c-4cc0-add0-bd38785c308e/volumes" Feb 02 17:19:52 crc kubenswrapper[4858]: I0202 17:19:52.415172 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b4f9546-2d15-4925-aba0-40e3b10098a0" path="/var/lib/kubelet/pods/9b4f9546-2d15-4925-aba0-40e3b10098a0/volumes" Feb 02 17:19:52 crc kubenswrapper[4858]: I0202 17:19:52.417715 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a32894ac-052e-4a93-a3d1-79aeec5b8869" path="/var/lib/kubelet/pods/a32894ac-052e-4a93-a3d1-79aeec5b8869/volumes" Feb 02 17:19:52 crc kubenswrapper[4858]: I0202 17:19:52.419415 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1040c7c-84e3-41c7-9484-13022fbcef4b" path="/var/lib/kubelet/pods/f1040c7c-84e3-41c7-9484-13022fbcef4b/volumes" Feb 02 17:19:58 crc kubenswrapper[4858]: I0202 17:19:58.341321 4858 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 17:19:58 crc kubenswrapper[4858]: I0202 17:19:58.342400 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://9e5faa8ff18d17e744c69079f73c90e02264905a93efd6f32f18a56aef774107" gracePeriod=5 Feb 02 17:20:00 crc kubenswrapper[4858]: I0202 17:20:00.192258 4858 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 02 17:20:03 crc kubenswrapper[4858]: I0202 17:20:03.907559 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 17:20:03 crc kubenswrapper[4858]: I0202 17:20:03.909097 4858 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="9e5faa8ff18d17e744c69079f73c90e02264905a93efd6f32f18a56aef774107" exitCode=137 Feb 02 17:20:03 crc kubenswrapper[4858]: I0202 17:20:03.909164 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cc04687642f51e4e809f0a09067e6d45b83709cad8aa84d220142c66d6c7333" Feb 02 17:20:03 crc kubenswrapper[4858]: I0202 17:20:03.932310 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 17:20:03 crc kubenswrapper[4858]: I0202 17:20:03.932377 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 17:20:04 crc kubenswrapper[4858]: I0202 17:20:04.100666 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 17:20:04 crc kubenswrapper[4858]: I0202 17:20:04.100717 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 17:20:04 crc kubenswrapper[4858]: I0202 17:20:04.100743 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 17:20:04 crc kubenswrapper[4858]: I0202 17:20:04.100762 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 17:20:04 crc kubenswrapper[4858]: I0202 17:20:04.100810 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 02 17:20:04 crc kubenswrapper[4858]: I0202 17:20:04.101041 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:20:04 crc kubenswrapper[4858]: I0202 17:20:04.101070 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:20:04 crc kubenswrapper[4858]: I0202 17:20:04.101089 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:20:04 crc kubenswrapper[4858]: I0202 17:20:04.101105 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:20:04 crc kubenswrapper[4858]: I0202 17:20:04.115510 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:20:04 crc kubenswrapper[4858]: I0202 17:20:04.202596 4858 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 17:20:04 crc kubenswrapper[4858]: I0202 17:20:04.202651 4858 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 02 17:20:04 crc kubenswrapper[4858]: I0202 17:20:04.202670 4858 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 02 17:20:04 crc kubenswrapper[4858]: I0202 17:20:04.202691 4858 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 02 17:20:04 crc kubenswrapper[4858]: I0202 17:20:04.202712 4858 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 02 17:20:04 crc kubenswrapper[4858]: I0202 17:20:04.407594 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 02 17:20:04 crc kubenswrapper[4858]: I0202 17:20:04.913509 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 02 17:20:08 crc kubenswrapper[4858]: I0202 17:20:08.206643 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 02 17:20:17 crc kubenswrapper[4858]: I0202 17:20:17.993416 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 02 17:20:17 crc kubenswrapper[4858]: I0202 17:20:17.998175 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 02 17:20:17 crc kubenswrapper[4858]: I0202 17:20:17.998629 4858 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9306be8b1142988ab2bc00a55929a220c03f42afa3abec4812949d2eaf6eb638" exitCode=137 Feb 02 17:20:17 crc kubenswrapper[4858]: I0202 17:20:17.999376 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9306be8b1142988ab2bc00a55929a220c03f42afa3abec4812949d2eaf6eb638"} Feb 02 17:20:17 crc kubenswrapper[4858]: I0202 17:20:17.999439 4858 scope.go:117] "RemoveContainer" containerID="8e66163730ccc2a9da1cf22e263e1e1034002c0329314665a100d39c850162b7" Feb 02 17:20:18 crc kubenswrapper[4858]: I0202 17:20:18.964580 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 02 17:20:19 crc kubenswrapper[4858]: I0202 17:20:19.007761 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 02 17:20:19 crc kubenswrapper[4858]: I0202 17:20:19.008988 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bce21e17e04ab80531fde3f3b0eb20226224d708030c030a8f41c16cba05ba07"} Feb 02 17:20:27 crc kubenswrapper[4858]: I0202 17:20:27.872537 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 17:20:27 crc kubenswrapper[4858]: I0202 17:20:27.880510 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 17:20:28 crc kubenswrapper[4858]: I0202 17:20:28.063637 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 17:20:28 crc kubenswrapper[4858]: I0202 17:20:28.070284 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.574073 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g7r5b"] Feb 02 17:20:35 crc kubenswrapper[4858]: E0202 17:20:35.574874 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d9c9f7-5f7c-4cc0-add0-bd38785c308e" containerName="marketplace-operator" Feb 
02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.574890 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d9c9f7-5f7c-4cc0-add0-bd38785c308e" containerName="marketplace-operator" Feb 02 17:20:35 crc kubenswrapper[4858]: E0202 17:20:35.574902 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4f9546-2d15-4925-aba0-40e3b10098a0" containerName="extract-content" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.574909 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4f9546-2d15-4925-aba0-40e3b10098a0" containerName="extract-content" Feb 02 17:20:35 crc kubenswrapper[4858]: E0202 17:20:35.574920 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69eb2d24-ee9f-4ef2-8bf0-233099196e0d" containerName="extract-content" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.574927 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="69eb2d24-ee9f-4ef2-8bf0-233099196e0d" containerName="extract-content" Feb 02 17:20:35 crc kubenswrapper[4858]: E0202 17:20:35.574937 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4f9546-2d15-4925-aba0-40e3b10098a0" containerName="extract-utilities" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.574943 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4f9546-2d15-4925-aba0-40e3b10098a0" containerName="extract-utilities" Feb 02 17:20:35 crc kubenswrapper[4858]: E0202 17:20:35.574954 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1040c7c-84e3-41c7-9484-13022fbcef4b" containerName="registry-server" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.574962 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1040c7c-84e3-41c7-9484-13022fbcef4b" containerName="registry-server" Feb 02 17:20:35 crc kubenswrapper[4858]: E0202 17:20:35.575004 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1040c7c-84e3-41c7-9484-13022fbcef4b" containerName="extract-content" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.575013 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1040c7c-84e3-41c7-9484-13022fbcef4b" containerName="extract-content" Feb 02 17:20:35 crc kubenswrapper[4858]: E0202 17:20:35.575025 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a32894ac-052e-4a93-a3d1-79aeec5b8869" containerName="extract-utilities" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.575032 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a32894ac-052e-4a93-a3d1-79aeec5b8869" containerName="extract-utilities" Feb 02 17:20:35 crc kubenswrapper[4858]: E0202 17:20:35.575041 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4f9546-2d15-4925-aba0-40e3b10098a0" containerName="registry-server" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.575048 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4f9546-2d15-4925-aba0-40e3b10098a0" containerName="registry-server" Feb 02 17:20:35 crc kubenswrapper[4858]: E0202 17:20:35.575057 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85579d4b-0219-4f36-8251-755e28bbe3ba" containerName="installer" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.575063 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="85579d4b-0219-4f36-8251-755e28bbe3ba" containerName="installer" Feb 02 17:20:35 crc kubenswrapper[4858]: E0202 17:20:35.575075 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69eb2d24-ee9f-4ef2-8bf0-233099196e0d" 
containerName="registry-server" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.575082 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="69eb2d24-ee9f-4ef2-8bf0-233099196e0d" containerName="registry-server" Feb 02 17:20:35 crc kubenswrapper[4858]: E0202 17:20:35.575091 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1040c7c-84e3-41c7-9484-13022fbcef4b" containerName="extract-utilities" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.575099 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1040c7c-84e3-41c7-9484-13022fbcef4b" containerName="extract-utilities" Feb 02 17:20:35 crc kubenswrapper[4858]: E0202 17:20:35.575110 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.575116 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 17:20:35 crc kubenswrapper[4858]: E0202 17:20:35.575125 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a32894ac-052e-4a93-a3d1-79aeec5b8869" containerName="registry-server" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.575133 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a32894ac-052e-4a93-a3d1-79aeec5b8869" containerName="registry-server" Feb 02 17:20:35 crc kubenswrapper[4858]: E0202 17:20:35.575143 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69eb2d24-ee9f-4ef2-8bf0-233099196e0d" containerName="extract-utilities" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.575151 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="69eb2d24-ee9f-4ef2-8bf0-233099196e0d" containerName="extract-utilities" Feb 02 17:20:35 crc kubenswrapper[4858]: E0202 17:20:35.575161 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a32894ac-052e-4a93-a3d1-79aeec5b8869" containerName="extract-content" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.575169 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a32894ac-052e-4a93-a3d1-79aeec5b8869" containerName="extract-content" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.575273 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1040c7c-84e3-41c7-9484-13022fbcef4b" containerName="registry-server" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.575286 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="85579d4b-0219-4f36-8251-755e28bbe3ba" containerName="installer" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.575294 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b4f9546-2d15-4925-aba0-40e3b10098a0" containerName="registry-server" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.575305 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.575314 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="69eb2d24-ee9f-4ef2-8bf0-233099196e0d" containerName="registry-server" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.575323 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="89d9c9f7-5f7c-4cc0-add0-bd38785c308e" containerName="marketplace-operator" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.575335 4858 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a32894ac-052e-4a93-a3d1-79aeec5b8869" containerName="registry-server" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.575751 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g7r5b" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.583865 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.586876 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.587068 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.587246 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.592287 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.592703 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g7r5b"] Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.742210 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49416635-c370-4a58-aa72-0c1d52fab5f3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g7r5b\" (UID: \"49416635-c370-4a58-aa72-0c1d52fab5f3\") " pod="openshift-marketplace/marketplace-operator-79b997595-g7r5b" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.742279 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/49416635-c370-4a58-aa72-0c1d52fab5f3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g7r5b\" (UID: \"49416635-c370-4a58-aa72-0c1d52fab5f3\") " pod="openshift-marketplace/marketplace-operator-79b997595-g7r5b" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.742312 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk9g6\" (UniqueName: \"kubernetes.io/projected/49416635-c370-4a58-aa72-0c1d52fab5f3-kube-api-access-kk9g6\") pod \"marketplace-operator-79b997595-g7r5b\" (UID: \"49416635-c370-4a58-aa72-0c1d52fab5f3\") " pod="openshift-marketplace/marketplace-operator-79b997595-g7r5b" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.843498 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/49416635-c370-4a58-aa72-0c1d52fab5f3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g7r5b\" (UID: \"49416635-c370-4a58-aa72-0c1d52fab5f3\") " pod="openshift-marketplace/marketplace-operator-79b997595-g7r5b" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.843592 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk9g6\" (UniqueName: \"kubernetes.io/projected/49416635-c370-4a58-aa72-0c1d52fab5f3-kube-api-access-kk9g6\") pod \"marketplace-operator-79b997595-g7r5b\" (UID: \"49416635-c370-4a58-aa72-0c1d52fab5f3\") 
" pod="openshift-marketplace/marketplace-operator-79b997595-g7r5b" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.843726 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49416635-c370-4a58-aa72-0c1d52fab5f3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g7r5b\" (UID: \"49416635-c370-4a58-aa72-0c1d52fab5f3\") " pod="openshift-marketplace/marketplace-operator-79b997595-g7r5b" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.845329 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49416635-c370-4a58-aa72-0c1d52fab5f3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g7r5b\" (UID: \"49416635-c370-4a58-aa72-0c1d52fab5f3\") " pod="openshift-marketplace/marketplace-operator-79b997595-g7r5b" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.852234 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/49416635-c370-4a58-aa72-0c1d52fab5f3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g7r5b\" (UID: \"49416635-c370-4a58-aa72-0c1d52fab5f3\") " pod="openshift-marketplace/marketplace-operator-79b997595-g7r5b" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.883960 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk9g6\" (UniqueName: \"kubernetes.io/projected/49416635-c370-4a58-aa72-0c1d52fab5f3-kube-api-access-kk9g6\") pod \"marketplace-operator-79b997595-g7r5b\" (UID: \"49416635-c370-4a58-aa72-0c1d52fab5f3\") " pod="openshift-marketplace/marketplace-operator-79b997595-g7r5b" Feb 02 17:20:35 crc kubenswrapper[4858]: I0202 17:20:35.892525 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g7r5b" Feb 02 17:20:36 crc kubenswrapper[4858]: I0202 17:20:36.285331 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g7r5b"] Feb 02 17:20:36 crc kubenswrapper[4858]: W0202 17:20:36.293055 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49416635_c370_4a58_aa72_0c1d52fab5f3.slice/crio-43e7336f4cb67b54556008f704cca65b4908dd9dcb7f799634397032b2e14a3e WatchSource:0}: Error finding container 43e7336f4cb67b54556008f704cca65b4908dd9dcb7f799634397032b2e14a3e: Status 404 returned error can't find the container with id 43e7336f4cb67b54556008f704cca65b4908dd9dcb7f799634397032b2e14a3e Feb 02 17:20:37 crc kubenswrapper[4858]: I0202 17:20:37.106814 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g7r5b" event={"ID":"49416635-c370-4a58-aa72-0c1d52fab5f3","Type":"ContainerStarted","Data":"6a7a24b8ee30359f8332299c9bfe3a0598907995f8fa39a8ebfe8de3858f04fe"} Feb 02 17:20:37 crc kubenswrapper[4858]: I0202 17:20:37.106856 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g7r5b" event={"ID":"49416635-c370-4a58-aa72-0c1d52fab5f3","Type":"ContainerStarted","Data":"43e7336f4cb67b54556008f704cca65b4908dd9dcb7f799634397032b2e14a3e"} Feb 02 17:20:37 crc kubenswrapper[4858]: I0202 17:20:37.107836 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-g7r5b" Feb 02 17:20:37 crc kubenswrapper[4858]: I0202 17:20:37.119216 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-g7r5b" Feb 02 17:20:37 crc kubenswrapper[4858]: I0202 17:20:37.122503 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-g7r5b" podStartSLOduration=2.122486059 podStartE2EDuration="2.122486059s" podCreationTimestamp="2026-02-02 17:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:20:37.119112631 +0000 UTC m=+338.271527926" watchObservedRunningTime="2026-02-02 17:20:37.122486059 +0000 UTC m=+338.274901324" Feb 02 17:20:57 crc kubenswrapper[4858]: I0202 17:20:57.808311 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:20:57 crc kubenswrapper[4858]: I0202 17:20:57.809181 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:21:06 crc kubenswrapper[4858]: I0202 17:21:06.879949 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-llwt8"] Feb 02 17:21:06 crc kubenswrapper[4858]: I0202 17:21:06.887758 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-llwt8" Feb 02 17:21:06 crc kubenswrapper[4858]: I0202 17:21:06.891325 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 17:21:06 crc kubenswrapper[4858]: I0202 17:21:06.910013 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-llwt8"] Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.066870 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-phnlb"] Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.071364 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-phnlb" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.073120 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.073863 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-phnlb"] Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.081822 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpdnx\" (UniqueName: \"kubernetes.io/projected/c1397f76-ca47-41cd-860f-4ecb3e5856fb-kube-api-access-zpdnx\") pod \"community-operators-phnlb\" (UID: \"c1397f76-ca47-41cd-860f-4ecb3e5856fb\") " pod="openshift-marketplace/community-operators-phnlb" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.081880 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/454bd674-ee00-420b-9910-16fe062ea116-utilities\") pod \"redhat-operators-llwt8\" (UID: \"454bd674-ee00-420b-9910-16fe062ea116\") " pod="openshift-marketplace/redhat-operators-llwt8" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.081907 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/454bd674-ee00-420b-9910-16fe062ea116-catalog-content\") pod \"redhat-operators-llwt8\" (UID: \"454bd674-ee00-420b-9910-16fe062ea116\") " pod="openshift-marketplace/redhat-operators-llwt8" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.081927 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69q9r\" (UniqueName: \"kubernetes.io/projected/454bd674-ee00-420b-9910-16fe062ea116-kube-api-access-69q9r\") pod \"redhat-operators-llwt8\" (UID: \"454bd674-ee00-420b-9910-16fe062ea116\") " pod="openshift-marketplace/redhat-operators-llwt8" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.081960 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1397f76-ca47-41cd-860f-4ecb3e5856fb-utilities\") pod \"community-operators-phnlb\" (UID: \"c1397f76-ca47-41cd-860f-4ecb3e5856fb\") " pod="openshift-marketplace/community-operators-phnlb" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.082004 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1397f76-ca47-41cd-860f-4ecb3e5856fb-catalog-content\") pod \"community-operators-phnlb\" (UID: 
\"c1397f76-ca47-41cd-860f-4ecb3e5856fb\") " pod="openshift-marketplace/community-operators-phnlb" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.183003 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69q9r\" (UniqueName: \"kubernetes.io/projected/454bd674-ee00-420b-9910-16fe062ea116-kube-api-access-69q9r\") pod \"redhat-operators-llwt8\" (UID: \"454bd674-ee00-420b-9910-16fe062ea116\") " pod="openshift-marketplace/redhat-operators-llwt8" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.183383 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1397f76-ca47-41cd-860f-4ecb3e5856fb-utilities\") pod \"community-operators-phnlb\" (UID: \"c1397f76-ca47-41cd-860f-4ecb3e5856fb\") " pod="openshift-marketplace/community-operators-phnlb" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.183423 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1397f76-ca47-41cd-860f-4ecb3e5856fb-catalog-content\") pod \"community-operators-phnlb\" (UID: \"c1397f76-ca47-41cd-860f-4ecb3e5856fb\") " pod="openshift-marketplace/community-operators-phnlb" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.183454 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpdnx\" (UniqueName: \"kubernetes.io/projected/c1397f76-ca47-41cd-860f-4ecb3e5856fb-kube-api-access-zpdnx\") pod \"community-operators-phnlb\" (UID: \"c1397f76-ca47-41cd-860f-4ecb3e5856fb\") " pod="openshift-marketplace/community-operators-phnlb" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.183512 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/454bd674-ee00-420b-9910-16fe062ea116-utilities\") pod \"redhat-operators-llwt8\" (UID: \"454bd674-ee00-420b-9910-16fe062ea116\") " pod="openshift-marketplace/redhat-operators-llwt8" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.183543 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/454bd674-ee00-420b-9910-16fe062ea116-catalog-content\") pod \"redhat-operators-llwt8\" (UID: \"454bd674-ee00-420b-9910-16fe062ea116\") " pod="openshift-marketplace/redhat-operators-llwt8" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.184532 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/454bd674-ee00-420b-9910-16fe062ea116-catalog-content\") pod \"redhat-operators-llwt8\" (UID: \"454bd674-ee00-420b-9910-16fe062ea116\") " pod="openshift-marketplace/redhat-operators-llwt8" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.184533 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/454bd674-ee00-420b-9910-16fe062ea116-utilities\") pod \"redhat-operators-llwt8\" (UID: \"454bd674-ee00-420b-9910-16fe062ea116\") " pod="openshift-marketplace/redhat-operators-llwt8" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.184923 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1397f76-ca47-41cd-860f-4ecb3e5856fb-utilities\") pod \"community-operators-phnlb\" (UID: \"c1397f76-ca47-41cd-860f-4ecb3e5856fb\") " 
pod="openshift-marketplace/community-operators-phnlb" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.184930 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1397f76-ca47-41cd-860f-4ecb3e5856fb-catalog-content\") pod \"community-operators-phnlb\" (UID: \"c1397f76-ca47-41cd-860f-4ecb3e5856fb\") " pod="openshift-marketplace/community-operators-phnlb" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.206892 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpdnx\" (UniqueName: \"kubernetes.io/projected/c1397f76-ca47-41cd-860f-4ecb3e5856fb-kube-api-access-zpdnx\") pod \"community-operators-phnlb\" (UID: \"c1397f76-ca47-41cd-860f-4ecb3e5856fb\") " pod="openshift-marketplace/community-operators-phnlb" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.213257 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69q9r\" (UniqueName: \"kubernetes.io/projected/454bd674-ee00-420b-9910-16fe062ea116-kube-api-access-69q9r\") pod \"redhat-operators-llwt8\" (UID: \"454bd674-ee00-420b-9910-16fe062ea116\") " pod="openshift-marketplace/redhat-operators-llwt8" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.252241 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-llwt8" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.390082 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-phnlb" Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.435143 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-llwt8"] Feb 02 17:21:07 crc kubenswrapper[4858]: I0202 17:21:07.557314 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-phnlb"] Feb 02 17:21:08 crc kubenswrapper[4858]: I0202 17:21:08.320879 4858 generic.go:334] "Generic (PLEG): container finished" podID="c1397f76-ca47-41cd-860f-4ecb3e5856fb" containerID="6ae3b7f7d7096ee4b0d1523f14cea767ee545d93b4717f83c691edbaf261b5c9" exitCode=0 Feb 02 17:21:08 crc kubenswrapper[4858]: I0202 17:21:08.321027 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phnlb" event={"ID":"c1397f76-ca47-41cd-860f-4ecb3e5856fb","Type":"ContainerDied","Data":"6ae3b7f7d7096ee4b0d1523f14cea767ee545d93b4717f83c691edbaf261b5c9"} Feb 02 17:21:08 crc kubenswrapper[4858]: I0202 17:21:08.321248 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phnlb" event={"ID":"c1397f76-ca47-41cd-860f-4ecb3e5856fb","Type":"ContainerStarted","Data":"d2565bde36c47f78df7a3eca33cb813f0fdfd2ec1d0bc98e326fc40827efede2"} Feb 02 17:21:08 crc kubenswrapper[4858]: I0202 17:21:08.322876 4858 generic.go:334] "Generic (PLEG): container finished" podID="454bd674-ee00-420b-9910-16fe062ea116" containerID="910ddfe8c7af8be56db348be51d783df2fdc86d7e107a97e57831e87cacb65e0" exitCode=0 Feb 02 17:21:08 crc kubenswrapper[4858]: I0202 17:21:08.322917 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llwt8" event={"ID":"454bd674-ee00-420b-9910-16fe062ea116","Type":"ContainerDied","Data":"910ddfe8c7af8be56db348be51d783df2fdc86d7e107a97e57831e87cacb65e0"} Feb 02 17:21:08 crc kubenswrapper[4858]: I0202 17:21:08.322945 4858 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-llwt8" event={"ID":"454bd674-ee00-420b-9910-16fe062ea116","Type":"ContainerStarted","Data":"d0378664750562afdfc490b4f1421fb3ec7d9cac4b0eeed0e44f048c2915c299"} Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.265949 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q6b5c"] Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.267090 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q6b5c" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.270738 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.283698 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q6b5c"] Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.329027 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phnlb" event={"ID":"c1397f76-ca47-41cd-860f-4ecb3e5856fb","Type":"ContainerStarted","Data":"c9e9f37c0e2931ffa3acc1b2a56e7a0ebe34aab44c2811d88865b114f55901a8"} Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.330870 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llwt8" event={"ID":"454bd674-ee00-420b-9910-16fe062ea116","Type":"ContainerStarted","Data":"a142e834f4fa16c06e70ca95d1cc59c446a1ee2b3a9c0537c9be088dd47a949f"} Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.411425 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b84894-e183-4874-8ca5-002436026fce-utilities\") pod \"certified-operators-q6b5c\" (UID: \"05b84894-e183-4874-8ca5-002436026fce\") " pod="openshift-marketplace/certified-operators-q6b5c" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.411828 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvrf8\" (UniqueName: \"kubernetes.io/projected/05b84894-e183-4874-8ca5-002436026fce-kube-api-access-zvrf8\") pod \"certified-operators-q6b5c\" (UID: \"05b84894-e183-4874-8ca5-002436026fce\") " pod="openshift-marketplace/certified-operators-q6b5c" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.411942 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b84894-e183-4874-8ca5-002436026fce-catalog-content\") pod \"certified-operators-q6b5c\" (UID: \"05b84894-e183-4874-8ca5-002436026fce\") " pod="openshift-marketplace/certified-operators-q6b5c" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.463807 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c22x2"] Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.464780 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c22x2" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.466193 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.473942 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c22x2"] Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.512928 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b84894-e183-4874-8ca5-002436026fce-utilities\") pod \"certified-operators-q6b5c\" (UID: \"05b84894-e183-4874-8ca5-002436026fce\") " pod="openshift-marketplace/certified-operators-q6b5c" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.512995 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvrf8\" (UniqueName: \"kubernetes.io/projected/05b84894-e183-4874-8ca5-002436026fce-kube-api-access-zvrf8\") pod \"certified-operators-q6b5c\" (UID: \"05b84894-e183-4874-8ca5-002436026fce\") " pod="openshift-marketplace/certified-operators-q6b5c" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.513216 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b84894-e183-4874-8ca5-002436026fce-catalog-content\") pod \"certified-operators-q6b5c\" (UID: \"05b84894-e183-4874-8ca5-002436026fce\") " pod="openshift-marketplace/certified-operators-q6b5c" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.513546 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b84894-e183-4874-8ca5-002436026fce-utilities\") pod \"certified-operators-q6b5c\" (UID: \"05b84894-e183-4874-8ca5-002436026fce\") " pod="openshift-marketplace/certified-operators-q6b5c" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.513644 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b84894-e183-4874-8ca5-002436026fce-catalog-content\") pod \"certified-operators-q6b5c\" (UID: \"05b84894-e183-4874-8ca5-002436026fce\") " pod="openshift-marketplace/certified-operators-q6b5c" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.529164 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvrf8\" (UniqueName: \"kubernetes.io/projected/05b84894-e183-4874-8ca5-002436026fce-kube-api-access-zvrf8\") pod \"certified-operators-q6b5c\" (UID: \"05b84894-e183-4874-8ca5-002436026fce\") " pod="openshift-marketplace/certified-operators-q6b5c" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.614830 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp9xr\" (UniqueName: \"kubernetes.io/projected/852607b1-d3cd-4688-a469-872ae6c5e98d-kube-api-access-dp9xr\") pod \"redhat-marketplace-c22x2\" (UID: \"852607b1-d3cd-4688-a469-872ae6c5e98d\") " pod="openshift-marketplace/redhat-marketplace-c22x2" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.615043 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852607b1-d3cd-4688-a469-872ae6c5e98d-utilities\") pod \"redhat-marketplace-c22x2\" (UID: 
\"852607b1-d3cd-4688-a469-872ae6c5e98d\") " pod="openshift-marketplace/redhat-marketplace-c22x2" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.615622 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852607b1-d3cd-4688-a469-872ae6c5e98d-catalog-content\") pod \"redhat-marketplace-c22x2\" (UID: \"852607b1-d3cd-4688-a469-872ae6c5e98d\") " pod="openshift-marketplace/redhat-marketplace-c22x2" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.629997 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q6b5c" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.716753 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp9xr\" (UniqueName: \"kubernetes.io/projected/852607b1-d3cd-4688-a469-872ae6c5e98d-kube-api-access-dp9xr\") pod \"redhat-marketplace-c22x2\" (UID: \"852607b1-d3cd-4688-a469-872ae6c5e98d\") " pod="openshift-marketplace/redhat-marketplace-c22x2" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.717030 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852607b1-d3cd-4688-a469-872ae6c5e98d-utilities\") pod \"redhat-marketplace-c22x2\" (UID: \"852607b1-d3cd-4688-a469-872ae6c5e98d\") " pod="openshift-marketplace/redhat-marketplace-c22x2" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.717089 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852607b1-d3cd-4688-a469-872ae6c5e98d-catalog-content\") pod \"redhat-marketplace-c22x2\" (UID: \"852607b1-d3cd-4688-a469-872ae6c5e98d\") " pod="openshift-marketplace/redhat-marketplace-c22x2" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.717562 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852607b1-d3cd-4688-a469-872ae6c5e98d-catalog-content\") pod \"redhat-marketplace-c22x2\" (UID: \"852607b1-d3cd-4688-a469-872ae6c5e98d\") " pod="openshift-marketplace/redhat-marketplace-c22x2" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.717669 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852607b1-d3cd-4688-a469-872ae6c5e98d-utilities\") pod \"redhat-marketplace-c22x2\" (UID: \"852607b1-d3cd-4688-a469-872ae6c5e98d\") " pod="openshift-marketplace/redhat-marketplace-c22x2" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.735712 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp9xr\" (UniqueName: \"kubernetes.io/projected/852607b1-d3cd-4688-a469-872ae6c5e98d-kube-api-access-dp9xr\") pod \"redhat-marketplace-c22x2\" (UID: \"852607b1-d3cd-4688-a469-872ae6c5e98d\") " pod="openshift-marketplace/redhat-marketplace-c22x2" Feb 02 17:21:09 crc kubenswrapper[4858]: I0202 17:21:09.864318 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c22x2" Feb 02 17:21:10 crc kubenswrapper[4858]: I0202 17:21:10.021464 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q6b5c"] Feb 02 17:21:10 crc kubenswrapper[4858]: I0202 17:21:10.080010 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c22x2"] Feb 02 17:21:10 crc kubenswrapper[4858]: I0202 17:21:10.337145 4858 generic.go:334] "Generic (PLEG): container finished" podID="852607b1-d3cd-4688-a469-872ae6c5e98d" containerID="639f0078aeb5a3101a8cbcb6f5700ea2f0503df3e9eef22dfc07159f7829e0fc" exitCode=0 Feb 02 17:21:10 crc kubenswrapper[4858]: I0202 17:21:10.337219 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c22x2" event={"ID":"852607b1-d3cd-4688-a469-872ae6c5e98d","Type":"ContainerDied","Data":"639f0078aeb5a3101a8cbcb6f5700ea2f0503df3e9eef22dfc07159f7829e0fc"} Feb 02 17:21:10 crc kubenswrapper[4858]: I0202 17:21:10.337281 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c22x2" event={"ID":"852607b1-d3cd-4688-a469-872ae6c5e98d","Type":"ContainerStarted","Data":"dc52365dc70e246411e70c872ccd15f5d1fbc5bdbdc79c83cb9b6844d6a4e89d"} Feb 02 17:21:10 crc kubenswrapper[4858]: I0202 17:21:10.340320 4858 generic.go:334] "Generic (PLEG): container finished" podID="c1397f76-ca47-41cd-860f-4ecb3e5856fb" containerID="c9e9f37c0e2931ffa3acc1b2a56e7a0ebe34aab44c2811d88865b114f55901a8" exitCode=0 Feb 02 17:21:10 crc kubenswrapper[4858]: I0202 17:21:10.340399 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phnlb" event={"ID":"c1397f76-ca47-41cd-860f-4ecb3e5856fb","Type":"ContainerDied","Data":"c9e9f37c0e2931ffa3acc1b2a56e7a0ebe34aab44c2811d88865b114f55901a8"} Feb 02 17:21:10 crc kubenswrapper[4858]: I0202 17:21:10.345907 4858 generic.go:334] "Generic (PLEG): container finished" podID="454bd674-ee00-420b-9910-16fe062ea116" containerID="a142e834f4fa16c06e70ca95d1cc59c446a1ee2b3a9c0537c9be088dd47a949f" exitCode=0 Feb 02 17:21:10 crc kubenswrapper[4858]: I0202 17:21:10.346038 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llwt8" event={"ID":"454bd674-ee00-420b-9910-16fe062ea116","Type":"ContainerDied","Data":"a142e834f4fa16c06e70ca95d1cc59c446a1ee2b3a9c0537c9be088dd47a949f"} Feb 02 17:21:10 crc kubenswrapper[4858]: I0202 17:21:10.349037 4858 generic.go:334] "Generic (PLEG): container finished" podID="05b84894-e183-4874-8ca5-002436026fce" containerID="430a71599501563e09dcbfc670afec364e19f8a55868e88fc3280bf1d2dcc017" exitCode=0 Feb 02 17:21:10 crc kubenswrapper[4858]: I0202 17:21:10.349065 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6b5c" event={"ID":"05b84894-e183-4874-8ca5-002436026fce","Type":"ContainerDied","Data":"430a71599501563e09dcbfc670afec364e19f8a55868e88fc3280bf1d2dcc017"} Feb 02 17:21:10 crc kubenswrapper[4858]: I0202 17:21:10.349089 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6b5c" event={"ID":"05b84894-e183-4874-8ca5-002436026fce","Type":"ContainerStarted","Data":"f3a33f0e8e45332f85d00a4ecd37df06cb06524eb5be68540cf4eca921af0a14"} Feb 02 17:21:11 crc kubenswrapper[4858]: I0202 17:21:11.355780 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phnlb" 
event={"ID":"c1397f76-ca47-41cd-860f-4ecb3e5856fb","Type":"ContainerStarted","Data":"2123d8b2cc2462f365a67502cd7cb4469241ce7a25013288f28c39ecaad8e617"} Feb 02 17:21:11 crc kubenswrapper[4858]: I0202 17:21:11.357601 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llwt8" event={"ID":"454bd674-ee00-420b-9910-16fe062ea116","Type":"ContainerStarted","Data":"c379d236c005c09f659c7fa5b2706184362526c25e35f3270d05b9c1251ce020"} Feb 02 17:21:11 crc kubenswrapper[4858]: I0202 17:21:11.364387 4858 generic.go:334] "Generic (PLEG): container finished" podID="05b84894-e183-4874-8ca5-002436026fce" containerID="e1a40014ccc5c6c80130d287dcf48454f9c0e2100654a21d45ba24995e2dc116" exitCode=0 Feb 02 17:21:11 crc kubenswrapper[4858]: I0202 17:21:11.364439 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6b5c" event={"ID":"05b84894-e183-4874-8ca5-002436026fce","Type":"ContainerDied","Data":"e1a40014ccc5c6c80130d287dcf48454f9c0e2100654a21d45ba24995e2dc116"} Feb 02 17:21:11 crc kubenswrapper[4858]: I0202 17:21:11.366772 4858 generic.go:334] "Generic (PLEG): container finished" podID="852607b1-d3cd-4688-a469-872ae6c5e98d" containerID="9ef22c75bc134c96b88de605ba53d7c9427c85f5ba46dd00f8f1c56037f985cd" exitCode=0 Feb 02 17:21:11 crc kubenswrapper[4858]: I0202 17:21:11.366809 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c22x2" event={"ID":"852607b1-d3cd-4688-a469-872ae6c5e98d","Type":"ContainerDied","Data":"9ef22c75bc134c96b88de605ba53d7c9427c85f5ba46dd00f8f1c56037f985cd"} Feb 02 17:21:11 crc kubenswrapper[4858]: I0202 17:21:11.392664 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-phnlb" podStartSLOduration=1.98778454 podStartE2EDuration="4.392645029s" podCreationTimestamp="2026-02-02 17:21:07 +0000 UTC" firstStartedPulling="2026-02-02 17:21:08.324948669 +0000 UTC m=+369.477363934" lastFinishedPulling="2026-02-02 17:21:10.729809158 +0000 UTC m=+371.882224423" observedRunningTime="2026-02-02 17:21:11.377176216 +0000 UTC m=+372.529591481" watchObservedRunningTime="2026-02-02 17:21:11.392645029 +0000 UTC m=+372.545060304" Feb 02 17:21:11 crc kubenswrapper[4858]: I0202 17:21:11.415283 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-llwt8" podStartSLOduration=2.974194782 podStartE2EDuration="5.41526377s" podCreationTimestamp="2026-02-02 17:21:06 +0000 UTC" firstStartedPulling="2026-02-02 17:21:08.325116104 +0000 UTC m=+369.477531369" lastFinishedPulling="2026-02-02 17:21:10.766185092 +0000 UTC m=+371.918600357" observedRunningTime="2026-02-02 17:21:11.409921444 +0000 UTC m=+372.562336729" watchObservedRunningTime="2026-02-02 17:21:11.41526377 +0000 UTC m=+372.567679035" Feb 02 17:21:13 crc kubenswrapper[4858]: I0202 17:21:13.380688 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c22x2" event={"ID":"852607b1-d3cd-4688-a469-872ae6c5e98d","Type":"ContainerStarted","Data":"ffc5d11e546ae84cfee893af4106229ecc83289b2a4889fb4802bc59c41f3819"} Feb 02 17:21:13 crc kubenswrapper[4858]: I0202 17:21:13.382722 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6b5c" event={"ID":"05b84894-e183-4874-8ca5-002436026fce","Type":"ContainerStarted","Data":"66a12e66906837936eec7665de810d4d03be654a95b4141bef41623209fe3e4c"} Feb 02 17:21:13 crc 
Feb 02 17:21:13 crc kubenswrapper[4858]: I0202 17:21:13.396740 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c22x2" podStartSLOduration=2.924070482 podStartE2EDuration="4.396724343s" podCreationTimestamp="2026-02-02 17:21:09 +0000 UTC" firstStartedPulling="2026-02-02 17:21:10.339671425 +0000 UTC m=+371.492086690" lastFinishedPulling="2026-02-02 17:21:11.812325286 +0000 UTC m=+372.964740551" observedRunningTime="2026-02-02 17:21:13.392789828 +0000 UTC m=+374.545205093" watchObservedRunningTime="2026-02-02 17:21:13.396724343 +0000 UTC m=+374.549139608"
Feb 02 17:21:13 crc kubenswrapper[4858]: I0202 17:21:13.414605 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q6b5c" podStartSLOduration=2.81731704 podStartE2EDuration="4.414592306s" podCreationTimestamp="2026-02-02 17:21:09 +0000 UTC" firstStartedPulling="2026-02-02 17:21:10.350845142 +0000 UTC m=+371.503260407" lastFinishedPulling="2026-02-02 17:21:11.948120408 +0000 UTC m=+373.100535673" observedRunningTime="2026-02-02 17:21:13.412113333 +0000 UTC m=+374.564528598" watchObservedRunningTime="2026-02-02 17:21:13.414592306 +0000 UTC m=+374.567007571"
Feb 02 17:21:16 crc kubenswrapper[4858]: I0202 17:21:16.789566 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pz65k"]
Feb 02 17:21:16 crc kubenswrapper[4858]: I0202 17:21:16.791064 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:16 crc kubenswrapper[4858]: I0202 17:21:16.806548 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pz65k"]
Feb 02 17:21:16 crc kubenswrapper[4858]: I0202 17:21:16.914907 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4-registry-tls\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:16 crc kubenswrapper[4858]: I0202 17:21:16.915182 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4-trusted-ca\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:16 crc kubenswrapper[4858]: I0202 17:21:16.915320 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:16 crc kubenswrapper[4858]: I0202 17:21:16.915444 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p8mn\" (UniqueName: \"kubernetes.io/projected/c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4-kube-api-access-9p8mn\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:16 crc kubenswrapper[4858]: I0202 17:21:16.915536 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:16 crc kubenswrapper[4858]: I0202 17:21:16.915641 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:16 crc kubenswrapper[4858]: I0202 17:21:16.915737 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4-registry-certificates\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:16 crc kubenswrapper[4858]: I0202 17:21:16.915851 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4-bound-sa-token\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:16 crc kubenswrapper[4858]: I0202 17:21:16.950966 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.017302 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4-registry-certificates\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.017370 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4-bound-sa-token\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.017400 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4-registry-tls\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.017433 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4-trusted-ca\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.017491 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8mn\" (UniqueName: \"kubernetes.io/projected/c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4-kube-api-access-9p8mn\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.017527 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.017551 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.018353 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.019259 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4-registry-certificates\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.019345 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4-trusted-ca\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.026666 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4-registry-tls\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.026669 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.034440 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4-bound-sa-token\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.038404 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p8mn\" (UniqueName: \"kubernetes.io/projected/c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4-kube-api-access-9p8mn\") pod \"image-registry-66df7c8f76-pz65k\" (UID: \"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4\") " pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.111595 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-pz65k"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.252516 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-llwt8"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.253642 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-llwt8"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.293178 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-llwt8"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.390383 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-phnlb"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.390456 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-phnlb"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.427334 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-phnlb"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.457743 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-llwt8"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.510850 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-phnlb"
Feb 02 17:21:17 crc kubenswrapper[4858]: I0202 17:21:17.556846 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pz65k"]
Feb 02 17:21:18 crc kubenswrapper[4858]: I0202 17:21:18.418539 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-pz65k" event={"ID":"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4","Type":"ContainerStarted","Data":"9677fa77ef2baf776df26e006431e9940c933e476b90a6473bf9ad476cf3dec6"}
Feb 02 17:21:18 crc kubenswrapper[4858]: I0202 17:21:18.418963 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-pz65k" event={"ID":"c3a9aa71-f89d-4ef1-a20d-7ce43ab14cf4","Type":"ContainerStarted","Data":"7c2d1af75a1efceb7b8bd73ec0d40b476d0e8b000519cd1928ed5c7dac31d07b"}
pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-pz65k" podStartSLOduration=2.442326101 podStartE2EDuration="2.442326101s" podCreationTimestamp="2026-02-02 17:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:21:18.441081324 +0000 UTC m=+379.593496609" watchObservedRunningTime="2026-02-02 17:21:18.442326101 +0000 UTC m=+379.594741366" Feb 02 17:21:19 crc kubenswrapper[4858]: I0202 17:21:19.424055 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-pz65k" Feb 02 17:21:19 crc kubenswrapper[4858]: I0202 17:21:19.630861 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q6b5c" Feb 02 17:21:19 crc kubenswrapper[4858]: I0202 17:21:19.630923 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q6b5c" Feb 02 17:21:19 crc kubenswrapper[4858]: I0202 17:21:19.691085 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q6b5c" Feb 02 17:21:19 crc kubenswrapper[4858]: I0202 17:21:19.864911 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c22x2" Feb 02 17:21:19 crc kubenswrapper[4858]: I0202 17:21:19.865762 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c22x2" Feb 02 17:21:19 crc kubenswrapper[4858]: I0202 17:21:19.926065 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c22x2" Feb 02 17:21:20 crc kubenswrapper[4858]: I0202 17:21:20.500419 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q6b5c" Feb 02 17:21:20 crc kubenswrapper[4858]: I0202 17:21:20.500714 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c22x2" Feb 02 17:21:27 crc kubenswrapper[4858]: I0202 17:21:27.808216 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:21:27 crc kubenswrapper[4858]: I0202 17:21:27.809081 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:21:37 crc kubenswrapper[4858]: I0202 17:21:37.124315 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-pz65k" Feb 02 17:21:37 crc kubenswrapper[4858]: I0202 17:21:37.205263 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j5zlt"] Feb 02 17:21:57 crc kubenswrapper[4858]: I0202 17:21:57.807529 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:21:57 crc kubenswrapper[4858]: I0202 17:21:57.808217 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:21:57 crc kubenswrapper[4858]: I0202 17:21:57.808278 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" Feb 02 17:21:57 crc kubenswrapper[4858]: I0202 17:21:57.808944 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"53c039250f690ce1254a34f24b2227f388a22d8e62f92b86cf497d453228deae"} pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 17:21:57 crc kubenswrapper[4858]: I0202 17:21:57.809026 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" containerID="cri-o://53c039250f690ce1254a34f24b2227f388a22d8e62f92b86cf497d453228deae" gracePeriod=600 Feb 02 17:21:58 crc kubenswrapper[4858]: I0202 17:21:58.685835 4858 generic.go:334] "Generic (PLEG): container finished" podID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerID="53c039250f690ce1254a34f24b2227f388a22d8e62f92b86cf497d453228deae" exitCode=0 Feb 02 17:21:58 crc kubenswrapper[4858]: I0202 17:21:58.685907 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerDied","Data":"53c039250f690ce1254a34f24b2227f388a22d8e62f92b86cf497d453228deae"} Feb 02 17:21:58 crc kubenswrapper[4858]: I0202 17:21:58.686549 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerStarted","Data":"bb53defa249b6a080019d6db0213995becaf964ff75fe4b36f783c31a6f70e41"} Feb 02 17:21:58 crc kubenswrapper[4858]: I0202 17:21:58.686584 4858 scope.go:117] "RemoveContainer" containerID="a3060eb52c9c081c561e2b97588ea0418eadc0e227307715ef71d48fb42bbf57" Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.268338 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" podUID="c022725c-9725-4d5c-a703-5d61c931d9e8" containerName="registry" containerID="cri-o://b30ae748918a03768dc1fa47e8a2fdc80fb91d57ecb5e1002063b8621346e4d8" gracePeriod=30 Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.702398 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.723957 4858 generic.go:334] "Generic (PLEG): container finished" podID="c022725c-9725-4d5c-a703-5d61c931d9e8" containerID="b30ae748918a03768dc1fa47e8a2fdc80fb91d57ecb5e1002063b8621346e4d8" exitCode=0 Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.724035 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" event={"ID":"c022725c-9725-4d5c-a703-5d61c931d9e8","Type":"ContainerDied","Data":"b30ae748918a03768dc1fa47e8a2fdc80fb91d57ecb5e1002063b8621346e4d8"} Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.724065 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" event={"ID":"c022725c-9725-4d5c-a703-5d61c931d9e8","Type":"ContainerDied","Data":"45bfd244eab3a347ecd468019c62f97040760531ab76bd38666c1a494cb3952c"} Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.724086 4858 scope.go:117] "RemoveContainer" containerID="b30ae748918a03768dc1fa47e8a2fdc80fb91d57ecb5e1002063b8621346e4d8" Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.724208 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j5zlt" Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.766055 4858 scope.go:117] "RemoveContainer" containerID="b30ae748918a03768dc1fa47e8a2fdc80fb91d57ecb5e1002063b8621346e4d8" Feb 02 17:22:02 crc kubenswrapper[4858]: E0202 17:22:02.766635 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b30ae748918a03768dc1fa47e8a2fdc80fb91d57ecb5e1002063b8621346e4d8\": container with ID starting with b30ae748918a03768dc1fa47e8a2fdc80fb91d57ecb5e1002063b8621346e4d8 not found: ID does not exist" containerID="b30ae748918a03768dc1fa47e8a2fdc80fb91d57ecb5e1002063b8621346e4d8" Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.766699 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b30ae748918a03768dc1fa47e8a2fdc80fb91d57ecb5e1002063b8621346e4d8"} err="failed to get container status \"b30ae748918a03768dc1fa47e8a2fdc80fb91d57ecb5e1002063b8621346e4d8\": rpc error: code = NotFound desc = could not find container \"b30ae748918a03768dc1fa47e8a2fdc80fb91d57ecb5e1002063b8621346e4d8\": container with ID starting with b30ae748918a03768dc1fa47e8a2fdc80fb91d57ecb5e1002063b8621346e4d8 not found: ID does not exist" Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.847686 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c022725c-9725-4d5c-a703-5d61c931d9e8-installation-pull-secrets\") pod \"c022725c-9725-4d5c-a703-5d61c931d9e8\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.847901 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c022725c-9725-4d5c-a703-5d61c931d9e8\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.847944 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/c022725c-9725-4d5c-a703-5d61c931d9e8-bound-sa-token\") pod \"c022725c-9725-4d5c-a703-5d61c931d9e8\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.848002 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c022725c-9725-4d5c-a703-5d61c931d9e8-registry-tls\") pod \"c022725c-9725-4d5c-a703-5d61c931d9e8\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.848049 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c022725c-9725-4d5c-a703-5d61c931d9e8-registry-certificates\") pod \"c022725c-9725-4d5c-a703-5d61c931d9e8\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.848080 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c022725c-9725-4d5c-a703-5d61c931d9e8-trusted-ca\") pod \"c022725c-9725-4d5c-a703-5d61c931d9e8\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.848111 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6jg6\" (UniqueName: \"kubernetes.io/projected/c022725c-9725-4d5c-a703-5d61c931d9e8-kube-api-access-c6jg6\") pod \"c022725c-9725-4d5c-a703-5d61c931d9e8\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.848150 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c022725c-9725-4d5c-a703-5d61c931d9e8-ca-trust-extracted\") pod \"c022725c-9725-4d5c-a703-5d61c931d9e8\" (UID: \"c022725c-9725-4d5c-a703-5d61c931d9e8\") " Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.849062 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c022725c-9725-4d5c-a703-5d61c931d9e8-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c022725c-9725-4d5c-a703-5d61c931d9e8" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.849741 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c022725c-9725-4d5c-a703-5d61c931d9e8-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c022725c-9725-4d5c-a703-5d61c931d9e8" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.853890 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c022725c-9725-4d5c-a703-5d61c931d9e8-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c022725c-9725-4d5c-a703-5d61c931d9e8" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.854126 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c022725c-9725-4d5c-a703-5d61c931d9e8-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c022725c-9725-4d5c-a703-5d61c931d9e8" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.861501 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c022725c-9725-4d5c-a703-5d61c931d9e8-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c022725c-9725-4d5c-a703-5d61c931d9e8" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.862095 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c022725c-9725-4d5c-a703-5d61c931d9e8-kube-api-access-c6jg6" (OuterVolumeSpecName: "kube-api-access-c6jg6") pod "c022725c-9725-4d5c-a703-5d61c931d9e8" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8"). InnerVolumeSpecName "kube-api-access-c6jg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.867167 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c022725c-9725-4d5c-a703-5d61c931d9e8" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.873945 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c022725c-9725-4d5c-a703-5d61c931d9e8-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c022725c-9725-4d5c-a703-5d61c931d9e8" (UID: "c022725c-9725-4d5c-a703-5d61c931d9e8"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.950491 4858 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c022725c-9725-4d5c-a703-5d61c931d9e8-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.950577 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c022725c-9725-4d5c-a703-5d61c931d9e8-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.950606 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6jg6\" (UniqueName: \"kubernetes.io/projected/c022725c-9725-4d5c-a703-5d61c931d9e8-kube-api-access-c6jg6\") on node \"crc\" DevicePath \"\"" Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.950633 4858 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c022725c-9725-4d5c-a703-5d61c931d9e8-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.950659 4858 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c022725c-9725-4d5c-a703-5d61c931d9e8-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.950678 4858 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c022725c-9725-4d5c-a703-5d61c931d9e8-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 17:22:02 crc kubenswrapper[4858]: I0202 17:22:02.950694 4858 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c022725c-9725-4d5c-a703-5d61c931d9e8-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 17:22:03 crc kubenswrapper[4858]: I0202 17:22:03.075613 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j5zlt"] Feb 02 17:22:03 crc kubenswrapper[4858]: I0202 17:22:03.088471 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j5zlt"] Feb 02 17:22:04 crc kubenswrapper[4858]: I0202 17:22:04.411090 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c022725c-9725-4d5c-a703-5d61c931d9e8" path="/var/lib/kubelet/pods/c022725c-9725-4d5c-a703-5d61c931d9e8/volumes" Feb 02 17:24:27 crc kubenswrapper[4858]: I0202 17:24:27.808200 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:24:27 crc kubenswrapper[4858]: I0202 17:24:27.808841 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.146907 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7kj75"] Feb 02 17:24:34 crc kubenswrapper[4858]: 
Feb 02 17:24:34 crc kubenswrapper[4858]: E0202 17:24:34.147581 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c022725c-9725-4d5c-a703-5d61c931d9e8" containerName="registry"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.147603 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c022725c-9725-4d5c-a703-5d61c931d9e8" containerName="registry"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.147772 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c022725c-9725-4d5c-a703-5d61c931d9e8" containerName="registry"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.148308 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kj75"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.159726 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.160245 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.160536 4858 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-j9tln"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.165009 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-dzxc6"]
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.166053 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-dzxc6"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.169932 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7kj75"]
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.172097 4858 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-bbth7"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.174014 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-bhlzx"]
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.175048 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-bhlzx"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.176999 4858 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-85944"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.189361 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-dzxc6"]
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.194882 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-bhlzx"]
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.325903 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwhww\" (UniqueName: \"kubernetes.io/projected/b09a6151-2124-4f22-b226-a1ae36869433-kube-api-access-wwhww\") pod \"cert-manager-cainjector-cf98fcc89-7kj75\" (UID: \"b09a6151-2124-4f22-b226-a1ae36869433\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kj75"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.325988 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlz5w\" (UniqueName: \"kubernetes.io/projected/bc586ae0-865f-490b-8ca0-bb157144af30-kube-api-access-jlz5w\") pod \"cert-manager-858654f9db-dzxc6\" (UID: \"bc586ae0-865f-490b-8ca0-bb157144af30\") " pod="cert-manager/cert-manager-858654f9db-dzxc6"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.326035 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4g9t\" (UniqueName: \"kubernetes.io/projected/786fe412-07f2-458a-bb89-f77dc747524c-kube-api-access-z4g9t\") pod \"cert-manager-webhook-687f57d79b-bhlzx\" (UID: \"786fe412-07f2-458a-bb89-f77dc747524c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-bhlzx"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.427752 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwhww\" (UniqueName: \"kubernetes.io/projected/b09a6151-2124-4f22-b226-a1ae36869433-kube-api-access-wwhww\") pod \"cert-manager-cainjector-cf98fcc89-7kj75\" (UID: \"b09a6151-2124-4f22-b226-a1ae36869433\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kj75"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.427841 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlz5w\" (UniqueName: \"kubernetes.io/projected/bc586ae0-865f-490b-8ca0-bb157144af30-kube-api-access-jlz5w\") pod \"cert-manager-858654f9db-dzxc6\" (UID: \"bc586ae0-865f-490b-8ca0-bb157144af30\") " pod="cert-manager/cert-manager-858654f9db-dzxc6"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.427889 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4g9t\" (UniqueName: \"kubernetes.io/projected/786fe412-07f2-458a-bb89-f77dc747524c-kube-api-access-z4g9t\") pod \"cert-manager-webhook-687f57d79b-bhlzx\" (UID: \"786fe412-07f2-458a-bb89-f77dc747524c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-bhlzx"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.450077 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4g9t\" (UniqueName: \"kubernetes.io/projected/786fe412-07f2-458a-bb89-f77dc747524c-kube-api-access-z4g9t\") pod \"cert-manager-webhook-687f57d79b-bhlzx\" (UID: \"786fe412-07f2-458a-bb89-f77dc747524c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-bhlzx"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.452613 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwhww\" (UniqueName: \"kubernetes.io/projected/b09a6151-2124-4f22-b226-a1ae36869433-kube-api-access-wwhww\") pod \"cert-manager-cainjector-cf98fcc89-7kj75\" (UID: \"b09a6151-2124-4f22-b226-a1ae36869433\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kj75"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.458256 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlz5w\" (UniqueName: \"kubernetes.io/projected/bc586ae0-865f-490b-8ca0-bb157144af30-kube-api-access-jlz5w\") pod \"cert-manager-858654f9db-dzxc6\" (UID: \"bc586ae0-865f-490b-8ca0-bb157144af30\") " pod="cert-manager/cert-manager-858654f9db-dzxc6"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.484129 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kj75"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.499508 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-dzxc6"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.507569 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-bhlzx"
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.739889 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-dzxc6"]
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.752749 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.779388 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7kj75"]
Feb 02 17:24:34 crc kubenswrapper[4858]: W0202 17:24:34.783309 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb09a6151_2124_4f22_b226_a1ae36869433.slice/crio-1c7b920c39327e25331eafd91cf6af0dfd8f71740fe38a3d260782d7f21fcf72 WatchSource:0}: Error finding container 1c7b920c39327e25331eafd91cf6af0dfd8f71740fe38a3d260782d7f21fcf72: Status 404 returned error can't find the container with id 1c7b920c39327e25331eafd91cf6af0dfd8f71740fe38a3d260782d7f21fcf72
Feb 02 17:24:34 crc kubenswrapper[4858]: I0202 17:24:34.812437 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-bhlzx"]
Feb 02 17:24:34 crc kubenswrapper[4858]: W0202 17:24:34.815924 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod786fe412_07f2_458a_bb89_f77dc747524c.slice/crio-65ebb58c368e897ddcf8c720a1b1995339c8123228a84509c8e261e3c4b80fb9 WatchSource:0}: Error finding container 65ebb58c368e897ddcf8c720a1b1995339c8123228a84509c8e261e3c4b80fb9: Status 404 returned error can't find the container with id 65ebb58c368e897ddcf8c720a1b1995339c8123228a84509c8e261e3c4b80fb9
Feb 02 17:24:35 crc kubenswrapper[4858]: I0202 17:24:35.712865 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kj75" event={"ID":"b09a6151-2124-4f22-b226-a1ae36869433","Type":"ContainerStarted","Data":"1c7b920c39327e25331eafd91cf6af0dfd8f71740fe38a3d260782d7f21fcf72"}
Feb 02 17:24:35 crc kubenswrapper[4858]: I0202 17:24:35.713796 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-dzxc6" event={"ID":"bc586ae0-865f-490b-8ca0-bb157144af30","Type":"ContainerStarted","Data":"ada0bca280369ee31cb3d4d2f928ce9038a9c64b27ff2ec1b6df2627912a273f"}
Feb 02 17:24:35 crc kubenswrapper[4858]: I0202 17:24:35.714705 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-bhlzx" event={"ID":"786fe412-07f2-458a-bb89-f77dc747524c","Type":"ContainerStarted","Data":"65ebb58c368e897ddcf8c720a1b1995339c8123228a84509c8e261e3c4b80fb9"}
Feb 02 17:24:39 crc kubenswrapper[4858]: I0202 17:24:39.735149 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kj75" event={"ID":"b09a6151-2124-4f22-b226-a1ae36869433","Type":"ContainerStarted","Data":"b1774a5f3a97488df2ba95015875d489130198a6979c488b668c040fb8865d51"}
Feb 02 17:24:39 crc kubenswrapper[4858]: I0202 17:24:39.738524 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-dzxc6" event={"ID":"bc586ae0-865f-490b-8ca0-bb157144af30","Type":"ContainerStarted","Data":"21e54b715792f020291d1d5137bd4c21eccf0bfa4fc712a7197654a1dac7bd4a"}
Feb 02 17:24:39 crc kubenswrapper[4858]: I0202 17:24:39.743430 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-bhlzx" event={"ID":"786fe412-07f2-458a-bb89-f77dc747524c","Type":"ContainerStarted","Data":"f845904ad17ac4e6c2f4fb075ed68df4b0d2be26b9ada726c265ac248641fc7e"}
Feb 02 17:24:39 crc kubenswrapper[4858]: I0202 17:24:39.743605 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-bhlzx"
Feb 02 17:24:39 crc kubenswrapper[4858]: I0202 17:24:39.753761 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kj75" podStartSLOduration=2.006266826 podStartE2EDuration="5.753735543s" podCreationTimestamp="2026-02-02 17:24:34 +0000 UTC" firstStartedPulling="2026-02-02 17:24:34.786124154 +0000 UTC m=+575.938539419" lastFinishedPulling="2026-02-02 17:24:38.533592861 +0000 UTC m=+579.686008136" observedRunningTime="2026-02-02 17:24:39.748084567 +0000 UTC m=+580.900499862" watchObservedRunningTime="2026-02-02 17:24:39.753735543 +0000 UTC m=+580.906150808"
Feb 02 17:24:39 crc kubenswrapper[4858]: I0202 17:24:39.765023 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-bhlzx" podStartSLOduration=1.976291828 podStartE2EDuration="5.765000473s" podCreationTimestamp="2026-02-02 17:24:34 +0000 UTC" firstStartedPulling="2026-02-02 17:24:34.817932372 +0000 UTC m=+575.970347637" lastFinishedPulling="2026-02-02 17:24:38.606641017 +0000 UTC m=+579.759056282" observedRunningTime="2026-02-02 17:24:39.763652306 +0000 UTC m=+580.916067581" watchObservedRunningTime="2026-02-02 17:24:39.765000473 +0000 UTC m=+580.917415738"
Feb 02 17:24:39 crc kubenswrapper[4858]: I0202 17:24:39.783144 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-dzxc6" podStartSLOduration=2.002548803 podStartE2EDuration="5.783126574s" podCreationTimestamp="2026-02-02 17:24:34 +0000 UTC" firstStartedPulling="2026-02-02 17:24:34.752415934 +0000 UTC m=+575.904831199" lastFinishedPulling="2026-02-02 17:24:38.532993705 +0000 UTC m=+579.685408970" observedRunningTime="2026-02-02 17:24:39.777789586 +0000 UTC m=+580.930204861" watchObservedRunningTime="2026-02-02 17:24:39.783126574 +0000 UTC m=+580.935541859"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.224226 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wkm4w"]
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.226047 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovn-controller" containerID="cri-o://4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4" gracePeriod=30
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.226152 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="nbdb" containerID="cri-o://78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d" gracePeriod=30
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.226263 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="northd" containerID="cri-o://9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839" gracePeriod=30
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.226255 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99" gracePeriod=30
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.226317 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="kube-rbac-proxy-node" containerID="cri-o://19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9" gracePeriod=30
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.226274 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="sbdb" containerID="cri-o://e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad" gracePeriod=30
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.226434 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovn-acl-logging" containerID="cri-o://740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab" gracePeriod=30
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.266343 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovnkube-controller" containerID="cri-o://f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321" gracePeriod=30
status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-bhlzx" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.577911 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wkm4w_ce405d19-c944-4a11-8195-bca9289b8d73/ovnkube-controller/3.log" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.579887 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wkm4w_ce405d19-c944-4a11-8195-bca9289b8d73/ovn-acl-logging/0.log" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.580327 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wkm4w_ce405d19-c944-4a11-8195-bca9289b8d73/ovn-controller/0.log" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.580815 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.635810 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gj9br"] Feb 02 17:24:44 crc kubenswrapper[4858]: E0202 17:24:44.636169 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="sbdb" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636201 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="sbdb" Feb 02 17:24:44 crc kubenswrapper[4858]: E0202 17:24:44.636228 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovnkube-controller" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636241 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovnkube-controller" Feb 02 17:24:44 crc kubenswrapper[4858]: E0202 17:24:44.636253 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="kubecfg-setup" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636266 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="kubecfg-setup" Feb 02 17:24:44 crc kubenswrapper[4858]: E0202 17:24:44.636290 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="northd" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636302 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="northd" Feb 02 17:24:44 crc kubenswrapper[4858]: E0202 17:24:44.636315 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636328 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 17:24:44 crc kubenswrapper[4858]: E0202 17:24:44.636348 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovn-controller" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636360 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovn-controller" Feb 02 17:24:44 crc kubenswrapper[4858]: E0202 
17:24:44.636375 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovnkube-controller" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636388 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovnkube-controller" Feb 02 17:24:44 crc kubenswrapper[4858]: E0202 17:24:44.636403 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovnkube-controller" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636415 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovnkube-controller" Feb 02 17:24:44 crc kubenswrapper[4858]: E0202 17:24:44.636431 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="nbdb" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636443 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="nbdb" Feb 02 17:24:44 crc kubenswrapper[4858]: E0202 17:24:44.636462 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovn-acl-logging" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636473 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovn-acl-logging" Feb 02 17:24:44 crc kubenswrapper[4858]: E0202 17:24:44.636490 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovnkube-controller" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636502 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovnkube-controller" Feb 02 17:24:44 crc kubenswrapper[4858]: E0202 17:24:44.636518 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="kube-rbac-proxy-node" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636531 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="kube-rbac-proxy-node" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636697 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="kube-rbac-proxy-node" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636724 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovnkube-controller" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636738 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="northd" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636752 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="nbdb" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636767 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovnkube-controller" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636782 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovnkube-controller" 
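The burst of cpu_manager, state_mem, and memory_manager entries above is the kubelet clearing per-container resource-manager state for the deleted ovnkube-node-wkm4w pod before admitting its replacement. A throwaway Go filter that tallies those RemoveStaleState lines per podUID/containerName when a dump like this one is piped to it on stdin; the regex only assumes the klog key=value format visible above:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // Matches both variants seen above: "RemoveStaleState: removing container"
        // (cpu_manager) and "RemoveStaleState removing state" (memory_manager).
        re := regexp.MustCompile(`"RemoveStaleState[^"]*"\s+podUID="([^"]+)"\s+containerName="([^"]+)"`)
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                counts[m[1]+"/"+m[2]]++
            }
        }
        for k, n := range counts { // map order is random; sort if it matters
            fmt.Printf("%s: %d\n", k, n)
        }
    }

Fed the node's kubelet journal (e.g. journalctl -u kubelet --no-pager), this surfaces which containers of the old pod left stale state behind.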
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636797 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="sbdb"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636812 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="kube-rbac-proxy-ovn-metrics"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636827 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovn-controller"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.636840 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovn-acl-logging"
Feb 02 17:24:44 crc kubenswrapper[4858]: E0202 17:24:44.637032 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovnkube-controller"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.637047 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovnkube-controller"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.637242 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovnkube-controller"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.637273 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" containerName="ovnkube-controller"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.640494 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gj9br"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678212 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-etc-openvswitch\") pod \"ce405d19-c944-4a11-8195-bca9289b8d73\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") "
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678260 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-slash\") pod \"ce405d19-c944-4a11-8195-bca9289b8d73\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") "
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678308 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-run-ovn\") pod \"ce405d19-c944-4a11-8195-bca9289b8d73\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") "
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678337 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csh5j\" (UniqueName: \"kubernetes.io/projected/ce405d19-c944-4a11-8195-bca9289b8d73-kube-api-access-csh5j\") pod \"ce405d19-c944-4a11-8195-bca9289b8d73\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") "
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678359 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-var-lib-openvswitch\") pod \"ce405d19-c944-4a11-8195-bca9289b8d73\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") "
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678382 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ce405d19-c944-4a11-8195-bca9289b8d73-ovnkube-config\") pod \"ce405d19-c944-4a11-8195-bca9289b8d73\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") "
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678325 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "ce405d19-c944-4a11-8195-bca9289b8d73" (UID: "ce405d19-c944-4a11-8195-bca9289b8d73"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678388 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-slash" (OuterVolumeSpecName: "host-slash") pod "ce405d19-c944-4a11-8195-bca9289b8d73" (UID: "ce405d19-c944-4a11-8195-bca9289b8d73"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678360 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "ce405d19-c944-4a11-8195-bca9289b8d73" (UID: "ce405d19-c944-4a11-8195-bca9289b8d73"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678405 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ce405d19-c944-4a11-8195-bca9289b8d73-env-overrides\") pod \"ce405d19-c944-4a11-8195-bca9289b8d73\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") "
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678485 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-log-socket\") pod \"ce405d19-c944-4a11-8195-bca9289b8d73\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") "
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678507 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-run-systemd\") pod \"ce405d19-c944-4a11-8195-bca9289b8d73\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") "
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678569 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ce405d19-c944-4a11-8195-bca9289b8d73-ovn-node-metrics-cert\") pod \"ce405d19-c944-4a11-8195-bca9289b8d73\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") "
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678597 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-cni-bin\") pod \"ce405d19-c944-4a11-8195-bca9289b8d73\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") "
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678618 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ce405d19-c944-4a11-8195-bca9289b8d73-ovnkube-script-lib\") pod \"ce405d19-c944-4a11-8195-bca9289b8d73\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") "
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678634 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-node-log\") pod \"ce405d19-c944-4a11-8195-bca9289b8d73\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") "
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678652 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ce405d19-c944-4a11-8195-bca9289b8d73\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") "
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678676 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-run-netns\") pod \"ce405d19-c944-4a11-8195-bca9289b8d73\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") "
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678694 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-kubelet\") pod \"ce405d19-c944-4a11-8195-bca9289b8d73\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") "
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678713 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-run-ovn-kubernetes\") pod \"ce405d19-c944-4a11-8195-bca9289b8d73\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") "
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678736 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-systemd-units\") pod \"ce405d19-c944-4a11-8195-bca9289b8d73\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") "
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678763 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-run-openvswitch\") pod \"ce405d19-c944-4a11-8195-bca9289b8d73\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") "
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678780 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-cni-netd\") pod \"ce405d19-c944-4a11-8195-bca9289b8d73\" (UID: \"ce405d19-c944-4a11-8195-bca9289b8d73\") "
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678897 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce405d19-c944-4a11-8195-bca9289b8d73-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ce405d19-c944-4a11-8195-bca9289b8d73" (UID: "ce405d19-c944-4a11-8195-bca9289b8d73"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678922 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-host-cni-netd\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678940 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "ce405d19-c944-4a11-8195-bca9289b8d73" (UID: "ce405d19-c944-4a11-8195-bca9289b8d73"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678952 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-host-cni-bin\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.678997 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-run-ovn\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679021 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-host-run-netns\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679044 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-etc-openvswitch\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679061 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-host-run-ovn-kubernetes\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679081 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-env-overrides\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679103 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679122 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-var-lib-openvswitch\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679146 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-ovn-node-metrics-cert\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679175 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-run-openvswitch\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679197 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-node-log\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679218 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-log-socket\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679236 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-ovnkube-script-lib\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679258 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vx7w\" (UniqueName: \"kubernetes.io/projected/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-kube-api-access-9vx7w\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679286 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-host-slash\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679330 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-systemd-units\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679345 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-ovnkube-config\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679369 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce405d19-c944-4a11-8195-bca9289b8d73-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ce405d19-c944-4a11-8195-bca9289b8d73" (UID: "ce405d19-c944-4a11-8195-bca9289b8d73"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679379 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-run-systemd\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679397 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-host-kubelet\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br"
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679435 4858 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679445 4858 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-slash\") on node \"crc\" DevicePath \"\""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679453 4858 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679462 4858 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679471 4858 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ce405d19-c944-4a11-8195-bca9289b8d73-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679479 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ce405d19-c944-4a11-8195-bca9289b8d73-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.679513 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-log-socket" (OuterVolumeSpecName: "log-socket") pod "ce405d19-c944-4a11-8195-bca9289b8d73" (UID: "ce405d19-c944-4a11-8195-bca9289b8d73"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.682601 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "ce405d19-c944-4a11-8195-bca9289b8d73" (UID: "ce405d19-c944-4a11-8195-bca9289b8d73"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.682691 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "ce405d19-c944-4a11-8195-bca9289b8d73" (UID: "ce405d19-c944-4a11-8195-bca9289b8d73"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.682820 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "ce405d19-c944-4a11-8195-bca9289b8d73" (UID: "ce405d19-c944-4a11-8195-bca9289b8d73"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.682904 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-node-log" (OuterVolumeSpecName: "node-log") pod "ce405d19-c944-4a11-8195-bca9289b8d73" (UID: "ce405d19-c944-4a11-8195-bca9289b8d73"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.682991 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "ce405d19-c944-4a11-8195-bca9289b8d73" (UID: "ce405d19-c944-4a11-8195-bca9289b8d73"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.683067 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "ce405d19-c944-4a11-8195-bca9289b8d73" (UID: "ce405d19-c944-4a11-8195-bca9289b8d73"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.683130 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "ce405d19-c944-4a11-8195-bca9289b8d73" (UID: "ce405d19-c944-4a11-8195-bca9289b8d73"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.683333 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "ce405d19-c944-4a11-8195-bca9289b8d73" (UID: "ce405d19-c944-4a11-8195-bca9289b8d73"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.683432 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "ce405d19-c944-4a11-8195-bca9289b8d73" (UID: "ce405d19-c944-4a11-8195-bca9289b8d73"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.683679 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce405d19-c944-4a11-8195-bca9289b8d73-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "ce405d19-c944-4a11-8195-bca9289b8d73" (UID: "ce405d19-c944-4a11-8195-bca9289b8d73"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.686325 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce405d19-c944-4a11-8195-bca9289b8d73-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "ce405d19-c944-4a11-8195-bca9289b8d73" (UID: "ce405d19-c944-4a11-8195-bca9289b8d73"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.686732 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce405d19-c944-4a11-8195-bca9289b8d73-kube-api-access-csh5j" (OuterVolumeSpecName: "kube-api-access-csh5j") pod "ce405d19-c944-4a11-8195-bca9289b8d73" (UID: "ce405d19-c944-4a11-8195-bca9289b8d73"). InnerVolumeSpecName "kube-api-access-csh5j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.693031 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "ce405d19-c944-4a11-8195-bca9289b8d73" (UID: "ce405d19-c944-4a11-8195-bca9289b8d73"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.778907 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9szlc_4bc7963e-1bdc-4038-805e-bd72fc217a13/kube-multus/2.log" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.779575 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9szlc_4bc7963e-1bdc-4038-805e-bd72fc217a13/kube-multus/1.log" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.779627 4858 generic.go:334] "Generic (PLEG): container finished" podID="4bc7963e-1bdc-4038-805e-bd72fc217a13" containerID="dc80d934773e7a1085767db5e7b28c615ef7491dfabb021de55cba2328bca076" exitCode=2 Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.779666 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9szlc" event={"ID":"4bc7963e-1bdc-4038-805e-bd72fc217a13","Type":"ContainerDied","Data":"dc80d934773e7a1085767db5e7b28c615ef7491dfabb021de55cba2328bca076"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.779753 4858 scope.go:117] "RemoveContainer" containerID="485b3125d8343c8afd6e5d3b756e0a924c75da1d89c5e699ff825a8b46957bb7" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.780266 4858 scope.go:117] "RemoveContainer" containerID="dc80d934773e7a1085767db5e7b28c615ef7491dfabb021de55cba2328bca076" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.780499 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-ovn-node-metrics-cert\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.780699 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-run-openvswitch\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.780833 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-node-log\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.780843 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-run-openvswitch\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: E0202 17:24:44.780582 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9szlc_openshift-multus(4bc7963e-1bdc-4038-805e-bd72fc217a13)\"" pod="openshift-multus/multus-9szlc" podUID="4bc7963e-1bdc-4038-805e-bd72fc217a13" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.780889 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-node-log\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.780962 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-log-socket\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781098 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-ovnkube-script-lib\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781129 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vx7w\" (UniqueName: \"kubernetes.io/projected/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-kube-api-access-9vx7w\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781173 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-host-slash\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781223 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-systemd-units\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781254 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-ovnkube-config\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781300 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-run-systemd\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781327 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-host-kubelet\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781368 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-host-cni-netd\") pod \"ovnkube-node-gj9br\" 
(UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781397 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-host-cni-bin\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781427 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-run-ovn\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781464 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-host-run-netns\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781490 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-etc-openvswitch\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781519 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-host-run-ovn-kubernetes\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781546 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-env-overrides\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781580 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781608 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-var-lib-openvswitch\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781650 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-host-kubelet\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781664 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csh5j\" (UniqueName: \"kubernetes.io/projected/ce405d19-c944-4a11-8195-bca9289b8d73-kube-api-access-csh5j\") on node \"crc\" DevicePath \"\"" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781723 4858 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-log-socket\") on node \"crc\" DevicePath \"\"" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781743 4858 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781749 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-host-cni-bin\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781762 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ce405d19-c944-4a11-8195-bca9289b8d73-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781780 4858 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781793 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-run-ovn\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781796 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ce405d19-c944-4a11-8195-bca9289b8d73-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781830 4858 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-node-log\") on node \"crc\" DevicePath \"\"" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781845 4858 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781861 4858 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781874 4858 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 02 17:24:44 crc 
kubenswrapper[4858]: I0202 17:24:44.781886 4858 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781900 4858 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781913 4858 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781925 4858 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce405d19-c944-4a11-8195-bca9289b8d73-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781721 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-host-cni-netd\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.782017 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-host-run-netns\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.782054 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-etc-openvswitch\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.782093 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-host-run-ovn-kubernetes\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.782171 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-systemd-units\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.782480 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-host-slash\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.781695 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-var-lib-openvswitch\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.782579 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.782696 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-env-overrides\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.783449 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-ovnkube-config\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.783807 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-run-systemd\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.787736 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-ovnkube-script-lib\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.788367 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-ovn-node-metrics-cert\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.788389 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-log-socket\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.790823 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wkm4w_ce405d19-c944-4a11-8195-bca9289b8d73/ovnkube-controller/3.log" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.809208 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wkm4w_ce405d19-c944-4a11-8195-bca9289b8d73/ovn-acl-logging/0.log" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.810830 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-wkm4w_ce405d19-c944-4a11-8195-bca9289b8d73/ovn-controller/0.log" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.811759 4858 generic.go:334] "Generic (PLEG): container finished" podID="ce405d19-c944-4a11-8195-bca9289b8d73" containerID="f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321" exitCode=0 Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.811793 4858 generic.go:334] "Generic (PLEG): container finished" podID="ce405d19-c944-4a11-8195-bca9289b8d73" containerID="e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad" exitCode=0 Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.811803 4858 generic.go:334] "Generic (PLEG): container finished" podID="ce405d19-c944-4a11-8195-bca9289b8d73" containerID="78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d" exitCode=0 Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.811812 4858 generic.go:334] "Generic (PLEG): container finished" podID="ce405d19-c944-4a11-8195-bca9289b8d73" containerID="9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839" exitCode=0 Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.811820 4858 generic.go:334] "Generic (PLEG): container finished" podID="ce405d19-c944-4a11-8195-bca9289b8d73" containerID="143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99" exitCode=0 Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.811827 4858 generic.go:334] "Generic (PLEG): container finished" podID="ce405d19-c944-4a11-8195-bca9289b8d73" containerID="19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9" exitCode=0 Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.811835 4858 generic.go:334] "Generic (PLEG): container finished" podID="ce405d19-c944-4a11-8195-bca9289b8d73" containerID="740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab" exitCode=143 Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.811846 4858 generic.go:334] "Generic (PLEG): container finished" podID="ce405d19-c944-4a11-8195-bca9289b8d73" containerID="4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4" exitCode=143 Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.811863 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerDied","Data":"f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.811902 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.811922 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerDied","Data":"e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.811948 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerDied","Data":"78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.811968 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerDied","Data":"9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812013 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerDied","Data":"143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812040 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerDied","Data":"19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812064 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812090 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812106 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812121 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812135 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812149 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812163 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812175 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812185 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812196 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812213 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerDied","Data":"740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812231 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812244 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812254 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812265 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812275 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812286 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812296 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812307 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812318 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812329 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812344 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerDied","Data":"4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812359 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812371 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812382 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812393 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812403 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812413 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812423 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812436 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812446 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812457 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812471 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-wkm4w" event={"ID":"ce405d19-c944-4a11-8195-bca9289b8d73","Type":"ContainerDied","Data":"e6388e015b4efe0b7d1369e0474426d8095e524d0f06ce875a94c2425b98a739"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812487 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812499 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 
17:24:44.812510 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812521 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812532 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812542 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812554 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812564 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812574 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.812585 4858 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4"} Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.820631 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vx7w\" (UniqueName: \"kubernetes.io/projected/e3476f0e-4b32-4b9d-a487-f3c8e256b7d0-kube-api-access-9vx7w\") pod \"ovnkube-node-gj9br\" (UID: \"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.832433 4858 scope.go:117] "RemoveContainer" containerID="f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.861213 4858 scope.go:117] "RemoveContainer" containerID="b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.859945 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wkm4w"] Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.866927 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-wkm4w"] Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.880232 4858 scope.go:117] "RemoveContainer" containerID="e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.892947 4858 scope.go:117] "RemoveContainer" containerID="78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.909098 4858 scope.go:117] "RemoveContainer" 
containerID="9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.921072 4858 scope.go:117] "RemoveContainer" containerID="143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.935016 4858 scope.go:117] "RemoveContainer" containerID="19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.955705 4858 scope.go:117] "RemoveContainer" containerID="740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.955874 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.974117 4858 scope.go:117] "RemoveContainer" containerID="4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4" Feb 02 17:24:44 crc kubenswrapper[4858]: I0202 17:24:44.993612 4858 scope.go:117] "RemoveContainer" containerID="d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.009317 4858 scope.go:117] "RemoveContainer" containerID="f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321" Feb 02 17:24:45 crc kubenswrapper[4858]: E0202 17:24:45.010226 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321\": container with ID starting with f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321 not found: ID does not exist" containerID="f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.010265 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321"} err="failed to get container status \"f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321\": rpc error: code = NotFound desc = could not find container \"f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321\": container with ID starting with f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.010293 4858 scope.go:117] "RemoveContainer" containerID="b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6" Feb 02 17:24:45 crc kubenswrapper[4858]: E0202 17:24:45.010774 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6\": container with ID starting with b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6 not found: ID does not exist" containerID="b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.010827 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6"} err="failed to get container status \"b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6\": rpc error: code = NotFound desc = could not find container \"b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6\": container with ID starting with 
b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.010866 4858 scope.go:117] "RemoveContainer" containerID="e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad" Feb 02 17:24:45 crc kubenswrapper[4858]: E0202 17:24:45.011472 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\": container with ID starting with e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad not found: ID does not exist" containerID="e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.011512 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad"} err="failed to get container status \"e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\": rpc error: code = NotFound desc = could not find container \"e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\": container with ID starting with e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.011536 4858 scope.go:117] "RemoveContainer" containerID="78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d" Feb 02 17:24:45 crc kubenswrapper[4858]: E0202 17:24:45.011917 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\": container with ID starting with 78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d not found: ID does not exist" containerID="78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.011961 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d"} err="failed to get container status \"78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\": rpc error: code = NotFound desc = could not find container \"78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\": container with ID starting with 78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.012016 4858 scope.go:117] "RemoveContainer" containerID="9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839" Feb 02 17:24:45 crc kubenswrapper[4858]: E0202 17:24:45.012408 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\": container with ID starting with 9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839 not found: ID does not exist" containerID="9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.012460 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839"} err="failed to get container status \"9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\": rpc 
error: code = NotFound desc = could not find container \"9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\": container with ID starting with 9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.012488 4858 scope.go:117] "RemoveContainer" containerID="143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99" Feb 02 17:24:45 crc kubenswrapper[4858]: E0202 17:24:45.012921 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\": container with ID starting with 143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99 not found: ID does not exist" containerID="143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.012946 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99"} err="failed to get container status \"143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\": rpc error: code = NotFound desc = could not find container \"143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\": container with ID starting with 143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.012967 4858 scope.go:117] "RemoveContainer" containerID="19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9" Feb 02 17:24:45 crc kubenswrapper[4858]: E0202 17:24:45.013434 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\": container with ID starting with 19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9 not found: ID does not exist" containerID="19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.013459 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9"} err="failed to get container status \"19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\": rpc error: code = NotFound desc = could not find container \"19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\": container with ID starting with 19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.013473 4858 scope.go:117] "RemoveContainer" containerID="740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab" Feb 02 17:24:45 crc kubenswrapper[4858]: E0202 17:24:45.013737 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\": container with ID starting with 740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab not found: ID does not exist" containerID="740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.013821 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab"} err="failed to get container status \"740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\": rpc error: code = NotFound desc = could not find container \"740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\": container with ID starting with 740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.013858 4858 scope.go:117] "RemoveContainer" containerID="4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4" Feb 02 17:24:45 crc kubenswrapper[4858]: E0202 17:24:45.014263 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\": container with ID starting with 4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4 not found: ID does not exist" containerID="4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.014285 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4"} err="failed to get container status \"4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\": rpc error: code = NotFound desc = could not find container \"4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\": container with ID starting with 4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.014300 4858 scope.go:117] "RemoveContainer" containerID="d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4" Feb 02 17:24:45 crc kubenswrapper[4858]: E0202 17:24:45.014874 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\": container with ID starting with d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4 not found: ID does not exist" containerID="d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.014920 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4"} err="failed to get container status \"d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\": rpc error: code = NotFound desc = could not find container \"d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\": container with ID starting with d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.015023 4858 scope.go:117] "RemoveContainer" containerID="f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.015466 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321"} err="failed to get container status \"f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321\": rpc error: code = NotFound desc = could not find container 
\"f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321\": container with ID starting with f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.015516 4858 scope.go:117] "RemoveContainer" containerID="b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.015890 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6"} err="failed to get container status \"b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6\": rpc error: code = NotFound desc = could not find container \"b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6\": container with ID starting with b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.015917 4858 scope.go:117] "RemoveContainer" containerID="e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.016367 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad"} err="failed to get container status \"e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\": rpc error: code = NotFound desc = could not find container \"e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\": container with ID starting with e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.016406 4858 scope.go:117] "RemoveContainer" containerID="78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.016816 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d"} err="failed to get container status \"78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\": rpc error: code = NotFound desc = could not find container \"78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\": container with ID starting with 78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.016861 4858 scope.go:117] "RemoveContainer" containerID="9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.017580 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839"} err="failed to get container status \"9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\": rpc error: code = NotFound desc = could not find container \"9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\": container with ID starting with 9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.017629 4858 scope.go:117] "RemoveContainer" containerID="143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.018043 4858 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99"} err="failed to get container status \"143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\": rpc error: code = NotFound desc = could not find container \"143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\": container with ID starting with 143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.018081 4858 scope.go:117] "RemoveContainer" containerID="19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.018541 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9"} err="failed to get container status \"19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\": rpc error: code = NotFound desc = could not find container \"19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\": container with ID starting with 19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.018563 4858 scope.go:117] "RemoveContainer" containerID="740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.018952 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab"} err="failed to get container status \"740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\": rpc error: code = NotFound desc = could not find container \"740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\": container with ID starting with 740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.018987 4858 scope.go:117] "RemoveContainer" containerID="4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.019404 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4"} err="failed to get container status \"4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\": rpc error: code = NotFound desc = could not find container \"4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\": container with ID starting with 4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.019424 4858 scope.go:117] "RemoveContainer" containerID="d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.019809 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4"} err="failed to get container status \"d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\": rpc error: code = NotFound desc = could not find container \"d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\": container with ID starting with 
d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.019832 4858 scope.go:117] "RemoveContainer" containerID="f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.020201 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321"} err="failed to get container status \"f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321\": rpc error: code = NotFound desc = could not find container \"f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321\": container with ID starting with f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.020217 4858 scope.go:117] "RemoveContainer" containerID="b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.020507 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6"} err="failed to get container status \"b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6\": rpc error: code = NotFound desc = could not find container \"b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6\": container with ID starting with b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.020525 4858 scope.go:117] "RemoveContainer" containerID="e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.020783 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad"} err="failed to get container status \"e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\": rpc error: code = NotFound desc = could not find container \"e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\": container with ID starting with e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.020799 4858 scope.go:117] "RemoveContainer" containerID="78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.021097 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d"} err="failed to get container status \"78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\": rpc error: code = NotFound desc = could not find container \"78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\": container with ID starting with 78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.021116 4858 scope.go:117] "RemoveContainer" containerID="9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.021420 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839"} err="failed to get container status \"9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\": rpc error: code = NotFound desc = could not find container \"9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\": container with ID starting with 9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.021437 4858 scope.go:117] "RemoveContainer" containerID="143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.021711 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99"} err="failed to get container status \"143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\": rpc error: code = NotFound desc = could not find container \"143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\": container with ID starting with 143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.021728 4858 scope.go:117] "RemoveContainer" containerID="19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.022064 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9"} err="failed to get container status \"19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\": rpc error: code = NotFound desc = could not find container \"19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\": container with ID starting with 19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.022084 4858 scope.go:117] "RemoveContainer" containerID="740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.022408 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab"} err="failed to get container status \"740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\": rpc error: code = NotFound desc = could not find container \"740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\": container with ID starting with 740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.022430 4858 scope.go:117] "RemoveContainer" containerID="4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.022754 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4"} err="failed to get container status \"4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\": rpc error: code = NotFound desc = could not find container \"4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\": container with ID starting with 4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4 not found: ID does not exist" Feb 
02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.022795 4858 scope.go:117] "RemoveContainer" containerID="d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.023193 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4"} err="failed to get container status \"d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\": rpc error: code = NotFound desc = could not find container \"d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\": container with ID starting with d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.023231 4858 scope.go:117] "RemoveContainer" containerID="f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.023526 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321"} err="failed to get container status \"f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321\": rpc error: code = NotFound desc = could not find container \"f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321\": container with ID starting with f99bb1927da4a834c76c5c179c4171aa7eec841da92a7caaec1e4451ae36e321 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.023547 4858 scope.go:117] "RemoveContainer" containerID="b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.023805 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6"} err="failed to get container status \"b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6\": rpc error: code = NotFound desc = could not find container \"b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6\": container with ID starting with b204539f583c37bb565b3768340a994c075f885b429c3929af07a14e7c7356f6 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.023823 4858 scope.go:117] "RemoveContainer" containerID="e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.024123 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad"} err="failed to get container status \"e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\": rpc error: code = NotFound desc = could not find container \"e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad\": container with ID starting with e5d8a9b3b9ee9bf0ace3eb6d653974a3803a434e0cb41375cc79c303abaeb3ad not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.024142 4858 scope.go:117] "RemoveContainer" containerID="78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.024418 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d"} err="failed to get container status 
\"78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\": rpc error: code = NotFound desc = could not find container \"78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d\": container with ID starting with 78d0ba9c4b87d88894ec50744b2b84689e2d2d4a2ecb4d79a1f04338c878588d not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.024437 4858 scope.go:117] "RemoveContainer" containerID="9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.024758 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839"} err="failed to get container status \"9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\": rpc error: code = NotFound desc = could not find container \"9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839\": container with ID starting with 9e459ea493a2ed572cbcf5956f113d57f309eba5785442c67ec12b61e05a7839 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.024774 4858 scope.go:117] "RemoveContainer" containerID="143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.025133 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99"} err="failed to get container status \"143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\": rpc error: code = NotFound desc = could not find container \"143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99\": container with ID starting with 143ac0f9608b9de0f56d7223c443e53d7afcd5200749b7c40e6224c86b352a99 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.025153 4858 scope.go:117] "RemoveContainer" containerID="19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.025439 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9"} err="failed to get container status \"19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\": rpc error: code = NotFound desc = could not find container \"19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9\": container with ID starting with 19fb53bd4e8e67accbf560f258e6bd762e9eafe6d34cb7dd948d5e80ff6b9ac9 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.025457 4858 scope.go:117] "RemoveContainer" containerID="740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.025792 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab"} err="failed to get container status \"740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\": rpc error: code = NotFound desc = could not find container \"740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab\": container with ID starting with 740eb115dec2344d0f2599ea1445a2c39643d37f0e3c4a82bb1124cea041d5ab not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.025810 4858 scope.go:117] "RemoveContainer" 
containerID="4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.026215 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4"} err="failed to get container status \"4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\": rpc error: code = NotFound desc = could not find container \"4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4\": container with ID starting with 4c557e030b38d822bd89c1436eb413aa532455436a1c0a779222d6bc964ba2e4 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.026234 4858 scope.go:117] "RemoveContainer" containerID="d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.026533 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4"} err="failed to get container status \"d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\": rpc error: code = NotFound desc = could not find container \"d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4\": container with ID starting with d1ff32e2409704d4fd29126d1d070e430b7ea99a7a9125ea35c687d569cae4a4 not found: ID does not exist" Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.820193 4858 generic.go:334] "Generic (PLEG): container finished" podID="e3476f0e-4b32-4b9d-a487-f3c8e256b7d0" containerID="cef19005f1ac311d2bfd403ebaf523f318ebbd7fd8424a9a1e3f5c74344b1a1e" exitCode=0 Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.820271 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" event={"ID":"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0","Type":"ContainerDied","Data":"cef19005f1ac311d2bfd403ebaf523f318ebbd7fd8424a9a1e3f5c74344b1a1e"} Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.820311 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" event={"ID":"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0","Type":"ContainerStarted","Data":"0d6426287c4e94295acc5b83d498b890725d6b33d8e040403a8fcfa52d1f731e"} Feb 02 17:24:45 crc kubenswrapper[4858]: I0202 17:24:45.822650 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9szlc_4bc7963e-1bdc-4038-805e-bd72fc217a13/kube-multus/2.log" Feb 02 17:24:46 crc kubenswrapper[4858]: I0202 17:24:46.410180 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce405d19-c944-4a11-8195-bca9289b8d73" path="/var/lib/kubelet/pods/ce405d19-c944-4a11-8195-bca9289b8d73/volumes" Feb 02 17:24:46 crc kubenswrapper[4858]: I0202 17:24:46.836231 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" event={"ID":"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0","Type":"ContainerStarted","Data":"b06c105a28ed3c10b6830f1d0a4854676ece80e0e7ad7304ef696a32346b91f4"} Feb 02 17:24:46 crc kubenswrapper[4858]: I0202 17:24:46.836577 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" event={"ID":"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0","Type":"ContainerStarted","Data":"a6eafa5a4228099929f83e68f4cb70d1088e518a2be5ea69d57655a5c03c1157"} Feb 02 17:24:46 crc kubenswrapper[4858]: I0202 17:24:46.836589 4858 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" event={"ID":"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0","Type":"ContainerStarted","Data":"e074a7c75fb82190f7f624fe1f561d6aca3b8d6cbf68124ef94667a312e57843"} Feb 02 17:24:46 crc kubenswrapper[4858]: I0202 17:24:46.836598 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" event={"ID":"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0","Type":"ContainerStarted","Data":"c6daa6cd2a4524166e913cc7f249e6d640bee0db7081f96dc33849911828e064"} Feb 02 17:24:46 crc kubenswrapper[4858]: I0202 17:24:46.836614 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" event={"ID":"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0","Type":"ContainerStarted","Data":"7be95aa210ff0be2d04c8bc2afe26dc3458715c36832a3bf4b28c665c8b7659a"} Feb 02 17:24:46 crc kubenswrapper[4858]: I0202 17:24:46.836624 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" event={"ID":"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0","Type":"ContainerStarted","Data":"ae731e5dbae8294763f8c8895ac120cef7a7787e20a016aba80e422faddfb985"} Feb 02 17:24:49 crc kubenswrapper[4858]: I0202 17:24:49.860247 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" event={"ID":"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0","Type":"ContainerStarted","Data":"4774a55ee87299de952ffb7251ea42954e3e57bc0c653aabeea8bd8226f52f92"} Feb 02 17:24:51 crc kubenswrapper[4858]: I0202 17:24:51.877600 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" event={"ID":"e3476f0e-4b32-4b9d-a487-f3c8e256b7d0","Type":"ContainerStarted","Data":"f5fd7551b7469a4e9936365007951f1fa5a5cce4d4ec37fe5ee3d015461ae10f"} Feb 02 17:24:51 crc kubenswrapper[4858]: I0202 17:24:51.878035 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:51 crc kubenswrapper[4858]: I0202 17:24:51.878052 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:51 crc kubenswrapper[4858]: I0202 17:24:51.910361 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" podStartSLOduration=7.910344072 podStartE2EDuration="7.910344072s" podCreationTimestamp="2026-02-02 17:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:24:51.907588926 +0000 UTC m=+593.060004271" watchObservedRunningTime="2026-02-02 17:24:51.910344072 +0000 UTC m=+593.062759327" Feb 02 17:24:51 crc kubenswrapper[4858]: I0202 17:24:51.915881 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:52 crc kubenswrapper[4858]: I0202 17:24:52.886252 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:52 crc kubenswrapper[4858]: I0202 17:24:52.976949 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:24:57 crc kubenswrapper[4858]: I0202 17:24:57.807298 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:24:57 crc kubenswrapper[4858]: I0202 17:24:57.807747 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:24:58 crc kubenswrapper[4858]: I0202 17:24:58.400770 4858 scope.go:117] "RemoveContainer" containerID="dc80d934773e7a1085767db5e7b28c615ef7491dfabb021de55cba2328bca076" Feb 02 17:24:58 crc kubenswrapper[4858]: E0202 17:24:58.401457 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9szlc_openshift-multus(4bc7963e-1bdc-4038-805e-bd72fc217a13)\"" pod="openshift-multus/multus-9szlc" podUID="4bc7963e-1bdc-4038-805e-bd72fc217a13" Feb 02 17:25:09 crc kubenswrapper[4858]: I0202 17:25:09.400550 4858 scope.go:117] "RemoveContainer" containerID="dc80d934773e7a1085767db5e7b28c615ef7491dfabb021de55cba2328bca076" Feb 02 17:25:09 crc kubenswrapper[4858]: I0202 17:25:09.997742 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9szlc_4bc7963e-1bdc-4038-805e-bd72fc217a13/kube-multus/2.log" Feb 02 17:25:09 crc kubenswrapper[4858]: I0202 17:25:09.998219 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9szlc" event={"ID":"4bc7963e-1bdc-4038-805e-bd72fc217a13","Type":"ContainerStarted","Data":"d9fff97512a63699ee2a15e90e200d7baef39e617939569760e48989378fe877"} Feb 02 17:25:15 crc kubenswrapper[4858]: I0202 17:25:14.996411 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gj9br" Feb 02 17:25:21 crc kubenswrapper[4858]: I0202 17:25:21.627625 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27"] Feb 02 17:25:21 crc kubenswrapper[4858]: I0202 17:25:21.630297 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27" Feb 02 17:25:21 crc kubenswrapper[4858]: I0202 17:25:21.636104 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27"] Feb 02 17:25:21 crc kubenswrapper[4858]: I0202 17:25:21.638365 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 17:25:21 crc kubenswrapper[4858]: I0202 17:25:21.804721 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c43bf8c-3e4f-4983-a524-7033f240b2f7-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27\" (UID: \"9c43bf8c-3e4f-4983-a524-7033f240b2f7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27" Feb 02 17:25:21 crc kubenswrapper[4858]: I0202 17:25:21.805163 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c43bf8c-3e4f-4983-a524-7033f240b2f7-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27\" (UID: \"9c43bf8c-3e4f-4983-a524-7033f240b2f7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27" Feb 02 17:25:21 crc kubenswrapper[4858]: I0202 17:25:21.805479 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zf4j\" (UniqueName: \"kubernetes.io/projected/9c43bf8c-3e4f-4983-a524-7033f240b2f7-kube-api-access-6zf4j\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27\" (UID: \"9c43bf8c-3e4f-4983-a524-7033f240b2f7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27" Feb 02 17:25:21 crc kubenswrapper[4858]: I0202 17:25:21.906944 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zf4j\" (UniqueName: \"kubernetes.io/projected/9c43bf8c-3e4f-4983-a524-7033f240b2f7-kube-api-access-6zf4j\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27\" (UID: \"9c43bf8c-3e4f-4983-a524-7033f240b2f7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27" Feb 02 17:25:21 crc kubenswrapper[4858]: I0202 17:25:21.907046 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c43bf8c-3e4f-4983-a524-7033f240b2f7-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27\" (UID: \"9c43bf8c-3e4f-4983-a524-7033f240b2f7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27" Feb 02 17:25:21 crc kubenswrapper[4858]: I0202 17:25:21.907076 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c43bf8c-3e4f-4983-a524-7033f240b2f7-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27\" (UID: \"9c43bf8c-3e4f-4983-a524-7033f240b2f7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27" Feb 02 17:25:21 crc kubenswrapper[4858]: I0202 17:25:21.907495 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/9c43bf8c-3e4f-4983-a524-7033f240b2f7-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27\" (UID: \"9c43bf8c-3e4f-4983-a524-7033f240b2f7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27" Feb 02 17:25:21 crc kubenswrapper[4858]: I0202 17:25:21.907757 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c43bf8c-3e4f-4983-a524-7033f240b2f7-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27\" (UID: \"9c43bf8c-3e4f-4983-a524-7033f240b2f7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27" Feb 02 17:25:21 crc kubenswrapper[4858]: I0202 17:25:21.939721 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zf4j\" (UniqueName: \"kubernetes.io/projected/9c43bf8c-3e4f-4983-a524-7033f240b2f7-kube-api-access-6zf4j\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27\" (UID: \"9c43bf8c-3e4f-4983-a524-7033f240b2f7\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27" Feb 02 17:25:21 crc kubenswrapper[4858]: I0202 17:25:21.948799 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27" Feb 02 17:25:22 crc kubenswrapper[4858]: I0202 17:25:22.227052 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27"] Feb 02 17:25:23 crc kubenswrapper[4858]: I0202 17:25:23.098904 4858 generic.go:334] "Generic (PLEG): container finished" podID="9c43bf8c-3e4f-4983-a524-7033f240b2f7" containerID="f72a325074187ec4a2a06c05a53f9e3bee1f4942b96aefa7ec411609ca82735f" exitCode=0 Feb 02 17:25:23 crc kubenswrapper[4858]: I0202 17:25:23.099043 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27" event={"ID":"9c43bf8c-3e4f-4983-a524-7033f240b2f7","Type":"ContainerDied","Data":"f72a325074187ec4a2a06c05a53f9e3bee1f4942b96aefa7ec411609ca82735f"} Feb 02 17:25:23 crc kubenswrapper[4858]: I0202 17:25:23.099465 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27" event={"ID":"9c43bf8c-3e4f-4983-a524-7033f240b2f7","Type":"ContainerStarted","Data":"1edc830e248979296be6e5e5d210a6eb6fe1dddb41c920237c0b0e47975e2b6d"} Feb 02 17:25:25 crc kubenswrapper[4858]: I0202 17:25:25.115638 4858 generic.go:334] "Generic (PLEG): container finished" podID="9c43bf8c-3e4f-4983-a524-7033f240b2f7" containerID="82ec1e55a51d2d33e0ba04b49547c6de674c4b81ae51b4d83c6492ccbfe2f4da" exitCode=0 Feb 02 17:25:25 crc kubenswrapper[4858]: I0202 17:25:25.115691 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27" event={"ID":"9c43bf8c-3e4f-4983-a524-7033f240b2f7","Type":"ContainerDied","Data":"82ec1e55a51d2d33e0ba04b49547c6de674c4b81ae51b4d83c6492ccbfe2f4da"} Feb 02 17:25:26 crc kubenswrapper[4858]: I0202 17:25:26.125601 4858 generic.go:334] "Generic (PLEG): container finished" podID="9c43bf8c-3e4f-4983-a524-7033f240b2f7" containerID="833b6ad54235691fb473e8f1f30177c68fb7777c10ad00cc35f965b9cac1c1e6" exitCode=0 Feb 02 17:25:26 crc kubenswrapper[4858]: I0202 
17:25:26.125955 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27" event={"ID":"9c43bf8c-3e4f-4983-a524-7033f240b2f7","Type":"ContainerDied","Data":"833b6ad54235691fb473e8f1f30177c68fb7777c10ad00cc35f965b9cac1c1e6"} Feb 02 17:25:27 crc kubenswrapper[4858]: I0202 17:25:27.474582 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27" Feb 02 17:25:27 crc kubenswrapper[4858]: I0202 17:25:27.675333 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c43bf8c-3e4f-4983-a524-7033f240b2f7-bundle\") pod \"9c43bf8c-3e4f-4983-a524-7033f240b2f7\" (UID: \"9c43bf8c-3e4f-4983-a524-7033f240b2f7\") " Feb 02 17:25:27 crc kubenswrapper[4858]: I0202 17:25:27.675416 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c43bf8c-3e4f-4983-a524-7033f240b2f7-util\") pod \"9c43bf8c-3e4f-4983-a524-7033f240b2f7\" (UID: \"9c43bf8c-3e4f-4983-a524-7033f240b2f7\") " Feb 02 17:25:27 crc kubenswrapper[4858]: I0202 17:25:27.675492 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zf4j\" (UniqueName: \"kubernetes.io/projected/9c43bf8c-3e4f-4983-a524-7033f240b2f7-kube-api-access-6zf4j\") pod \"9c43bf8c-3e4f-4983-a524-7033f240b2f7\" (UID: \"9c43bf8c-3e4f-4983-a524-7033f240b2f7\") " Feb 02 17:25:27 crc kubenswrapper[4858]: I0202 17:25:27.676279 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c43bf8c-3e4f-4983-a524-7033f240b2f7-bundle" (OuterVolumeSpecName: "bundle") pod "9c43bf8c-3e4f-4983-a524-7033f240b2f7" (UID: "9c43bf8c-3e4f-4983-a524-7033f240b2f7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:25:27 crc kubenswrapper[4858]: I0202 17:25:27.684750 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c43bf8c-3e4f-4983-a524-7033f240b2f7-kube-api-access-6zf4j" (OuterVolumeSpecName: "kube-api-access-6zf4j") pod "9c43bf8c-3e4f-4983-a524-7033f240b2f7" (UID: "9c43bf8c-3e4f-4983-a524-7033f240b2f7"). InnerVolumeSpecName "kube-api-access-6zf4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:25:27 crc kubenswrapper[4858]: I0202 17:25:27.708932 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c43bf8c-3e4f-4983-a524-7033f240b2f7-util" (OuterVolumeSpecName: "util") pod "9c43bf8c-3e4f-4983-a524-7033f240b2f7" (UID: "9c43bf8c-3e4f-4983-a524-7033f240b2f7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:25:27 crc kubenswrapper[4858]: I0202 17:25:27.776731 4858 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c43bf8c-3e4f-4983-a524-7033f240b2f7-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:25:27 crc kubenswrapper[4858]: I0202 17:25:27.776797 4858 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c43bf8c-3e4f-4983-a524-7033f240b2f7-util\") on node \"crc\" DevicePath \"\"" Feb 02 17:25:27 crc kubenswrapper[4858]: I0202 17:25:27.776819 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zf4j\" (UniqueName: \"kubernetes.io/projected/9c43bf8c-3e4f-4983-a524-7033f240b2f7-kube-api-access-6zf4j\") on node \"crc\" DevicePath \"\"" Feb 02 17:25:27 crc kubenswrapper[4858]: I0202 17:25:27.808911 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:25:27 crc kubenswrapper[4858]: I0202 17:25:27.809026 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:25:27 crc kubenswrapper[4858]: I0202 17:25:27.809139 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" Feb 02 17:25:27 crc kubenswrapper[4858]: I0202 17:25:27.810395 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bb53defa249b6a080019d6db0213995becaf964ff75fe4b36f783c31a6f70e41"} pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 17:25:27 crc kubenswrapper[4858]: I0202 17:25:27.810710 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" containerID="cri-o://bb53defa249b6a080019d6db0213995becaf964ff75fe4b36f783c31a6f70e41" gracePeriod=600 Feb 02 17:25:28 crc kubenswrapper[4858]: I0202 17:25:28.145453 4858 generic.go:334] "Generic (PLEG): container finished" podID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerID="bb53defa249b6a080019d6db0213995becaf964ff75fe4b36f783c31a6f70e41" exitCode=0 Feb 02 17:25:28 crc kubenswrapper[4858]: I0202 17:25:28.145830 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerDied","Data":"bb53defa249b6a080019d6db0213995becaf964ff75fe4b36f783c31a6f70e41"} Feb 02 17:25:28 crc kubenswrapper[4858]: I0202 17:25:28.145881 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerStarted","Data":"a4515f303cdc3d4371d56381b323ae5d013576ca1083c363dcab5d75f03e2725"} Feb 02 
17:25:28 crc kubenswrapper[4858]: I0202 17:25:28.145900 4858 scope.go:117] "RemoveContainer" containerID="53c039250f690ce1254a34f24b2227f388a22d8e62f92b86cf497d453228deae" Feb 02 17:25:28 crc kubenswrapper[4858]: I0202 17:25:28.154561 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27" event={"ID":"9c43bf8c-3e4f-4983-a524-7033f240b2f7","Type":"ContainerDied","Data":"1edc830e248979296be6e5e5d210a6eb6fe1dddb41c920237c0b0e47975e2b6d"} Feb 02 17:25:28 crc kubenswrapper[4858]: I0202 17:25:28.154618 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1edc830e248979296be6e5e5d210a6eb6fe1dddb41c920237c0b0e47975e2b6d" Feb 02 17:25:28 crc kubenswrapper[4858]: I0202 17:25:28.154657 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27" Feb 02 17:25:30 crc kubenswrapper[4858]: I0202 17:25:30.617823 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pmkq5"] Feb 02 17:25:30 crc kubenswrapper[4858]: E0202 17:25:30.618652 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c43bf8c-3e4f-4983-a524-7033f240b2f7" containerName="util" Feb 02 17:25:30 crc kubenswrapper[4858]: I0202 17:25:30.618668 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c43bf8c-3e4f-4983-a524-7033f240b2f7" containerName="util" Feb 02 17:25:30 crc kubenswrapper[4858]: E0202 17:25:30.618693 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c43bf8c-3e4f-4983-a524-7033f240b2f7" containerName="extract" Feb 02 17:25:30 crc kubenswrapper[4858]: I0202 17:25:30.618702 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c43bf8c-3e4f-4983-a524-7033f240b2f7" containerName="extract" Feb 02 17:25:30 crc kubenswrapper[4858]: E0202 17:25:30.618712 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c43bf8c-3e4f-4983-a524-7033f240b2f7" containerName="pull" Feb 02 17:25:30 crc kubenswrapper[4858]: I0202 17:25:30.618718 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c43bf8c-3e4f-4983-a524-7033f240b2f7" containerName="pull" Feb 02 17:25:30 crc kubenswrapper[4858]: I0202 17:25:30.618846 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c43bf8c-3e4f-4983-a524-7033f240b2f7" containerName="extract" Feb 02 17:25:30 crc kubenswrapper[4858]: I0202 17:25:30.619360 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-pmkq5" Feb 02 17:25:30 crc kubenswrapper[4858]: I0202 17:25:30.621707 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 02 17:25:30 crc kubenswrapper[4858]: I0202 17:25:30.621707 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 02 17:25:30 crc kubenswrapper[4858]: I0202 17:25:30.621718 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-ks56b" Feb 02 17:25:30 crc kubenswrapper[4858]: I0202 17:25:30.631648 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pmkq5"] Feb 02 17:25:30 crc kubenswrapper[4858]: I0202 17:25:30.817107 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtksh\" (UniqueName: \"kubernetes.io/projected/f3603e7c-ff14-4deb-a9d8-e5751a729be6-kube-api-access-dtksh\") pod \"nmstate-operator-646758c888-pmkq5\" (UID: \"f3603e7c-ff14-4deb-a9d8-e5751a729be6\") " pod="openshift-nmstate/nmstate-operator-646758c888-pmkq5" Feb 02 17:25:30 crc kubenswrapper[4858]: I0202 17:25:30.917928 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtksh\" (UniqueName: \"kubernetes.io/projected/f3603e7c-ff14-4deb-a9d8-e5751a729be6-kube-api-access-dtksh\") pod \"nmstate-operator-646758c888-pmkq5\" (UID: \"f3603e7c-ff14-4deb-a9d8-e5751a729be6\") " pod="openshift-nmstate/nmstate-operator-646758c888-pmkq5" Feb 02 17:25:30 crc kubenswrapper[4858]: I0202 17:25:30.937364 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtksh\" (UniqueName: \"kubernetes.io/projected/f3603e7c-ff14-4deb-a9d8-e5751a729be6-kube-api-access-dtksh\") pod \"nmstate-operator-646758c888-pmkq5\" (UID: \"f3603e7c-ff14-4deb-a9d8-e5751a729be6\") " pod="openshift-nmstate/nmstate-operator-646758c888-pmkq5" Feb 02 17:25:30 crc kubenswrapper[4858]: I0202 17:25:30.940886 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-pmkq5" Feb 02 17:25:31 crc kubenswrapper[4858]: I0202 17:25:31.161372 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pmkq5"] Feb 02 17:25:31 crc kubenswrapper[4858]: I0202 17:25:31.178432 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-pmkq5" event={"ID":"f3603e7c-ff14-4deb-a9d8-e5751a729be6","Type":"ContainerStarted","Data":"112f960c5a0807480e91807a0058f2b2d98d6c4206a9c74c26020f9cfd538602"} Feb 02 17:25:33 crc kubenswrapper[4858]: I0202 17:25:33.191856 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-pmkq5" event={"ID":"f3603e7c-ff14-4deb-a9d8-e5751a729be6","Type":"ContainerStarted","Data":"0bfa6b8a85be0a3406e6c08e5e7ff82a1ab000ec35813e1ab3d07807a29c0ffc"} Feb 02 17:25:33 crc kubenswrapper[4858]: I0202 17:25:33.210956 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-pmkq5" podStartSLOduration=1.439055919 podStartE2EDuration="3.210934393s" podCreationTimestamp="2026-02-02 17:25:30 +0000 UTC" firstStartedPulling="2026-02-02 17:25:31.172347099 +0000 UTC m=+632.324762354" lastFinishedPulling="2026-02-02 17:25:32.944225543 +0000 UTC m=+634.096640828" observedRunningTime="2026-02-02 17:25:33.205895193 +0000 UTC m=+634.358310458" watchObservedRunningTime="2026-02-02 17:25:33.210934393 +0000 UTC m=+634.363349658" Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.799535 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-8fldl"] Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.802094 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-8fldl" Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.806962 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-wj5s2" Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.827324 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-t9gvb"] Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.829107 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t9gvb" Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.835856 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.845139 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-8fldl"] Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.875028 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/11353839-2688-4112-a9d9-87bead34c26a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-t9gvb\" (UID: \"11353839-2688-4112-a9d9-87bead34c26a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t9gvb" Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.875085 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc8ws\" (UniqueName: \"kubernetes.io/projected/11353839-2688-4112-a9d9-87bead34c26a-kube-api-access-wc8ws\") pod \"nmstate-webhook-8474b5b9d8-t9gvb\" (UID: \"11353839-2688-4112-a9d9-87bead34c26a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t9gvb" Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.877049 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-t9gvb"] Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.896894 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-9rsf6"] Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.897816 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-9rsf6" Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.959794 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-f8nz8"] Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.960743 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f8nz8" Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.962629 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.962875 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-6stn5" Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.963364 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.969297 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-f8nz8"] Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.976121 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sspr6\" (UniqueName: \"kubernetes.io/projected/cdee88da-b22d-4fe4-98a2-a53cadedb993-kube-api-access-sspr6\") pod \"nmstate-metrics-54757c584b-8fldl\" (UID: \"cdee88da-b22d-4fe4-98a2-a53cadedb993\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-8fldl" Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.976186 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/11353839-2688-4112-a9d9-87bead34c26a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-t9gvb\" (UID: \"11353839-2688-4112-a9d9-87bead34c26a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t9gvb" Feb 02 17:25:41 crc kubenswrapper[4858]: E0202 17:25:41.976294 4858 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 02 17:25:41 crc kubenswrapper[4858]: E0202 17:25:41.976347 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11353839-2688-4112-a9d9-87bead34c26a-tls-key-pair podName:11353839-2688-4112-a9d9-87bead34c26a nodeName:}" failed. No retries permitted until 2026-02-02 17:25:42.476326689 +0000 UTC m=+643.628741954 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/11353839-2688-4112-a9d9-87bead34c26a-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-t9gvb" (UID: "11353839-2688-4112-a9d9-87bead34c26a") : secret "openshift-nmstate-webhook" not found Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.976436 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch8qm\" (UniqueName: \"kubernetes.io/projected/a69645ff-c03c-4296-aa6a-63cd14095040-kube-api-access-ch8qm\") pod \"nmstate-console-plugin-7754f76f8b-f8nz8\" (UID: \"a69645ff-c03c-4296-aa6a-63cd14095040\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f8nz8" Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.976473 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a69645ff-c03c-4296-aa6a-63cd14095040-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-f8nz8\" (UID: \"a69645ff-c03c-4296-aa6a-63cd14095040\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f8nz8" Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.976498 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc8ws\" (UniqueName: \"kubernetes.io/projected/11353839-2688-4112-a9d9-87bead34c26a-kube-api-access-wc8ws\") pod \"nmstate-webhook-8474b5b9d8-t9gvb\" (UID: \"11353839-2688-4112-a9d9-87bead34c26a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t9gvb" Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.976527 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a69645ff-c03c-4296-aa6a-63cd14095040-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-f8nz8\" (UID: \"a69645ff-c03c-4296-aa6a-63cd14095040\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f8nz8" Feb 02 17:25:41 crc kubenswrapper[4858]: I0202 17:25:41.994493 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc8ws\" (UniqueName: \"kubernetes.io/projected/11353839-2688-4112-a9d9-87bead34c26a-kube-api-access-wc8ws\") pod \"nmstate-webhook-8474b5b9d8-t9gvb\" (UID: \"11353839-2688-4112-a9d9-87bead34c26a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t9gvb" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.078096 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/35bb0d37-e388-42c3-ad03-2cbb0e4a9409-ovs-socket\") pod \"nmstate-handler-9rsf6\" (UID: \"35bb0d37-e388-42c3-ad03-2cbb0e4a9409\") " pod="openshift-nmstate/nmstate-handler-9rsf6" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.078169 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sspr6\" (UniqueName: \"kubernetes.io/projected/cdee88da-b22d-4fe4-98a2-a53cadedb993-kube-api-access-sspr6\") pod \"nmstate-metrics-54757c584b-8fldl\" (UID: \"cdee88da-b22d-4fe4-98a2-a53cadedb993\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-8fldl" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.078243 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch8qm\" (UniqueName: \"kubernetes.io/projected/a69645ff-c03c-4296-aa6a-63cd14095040-kube-api-access-ch8qm\") pod 
\"nmstate-console-plugin-7754f76f8b-f8nz8\" (UID: \"a69645ff-c03c-4296-aa6a-63cd14095040\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f8nz8" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.078273 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a69645ff-c03c-4296-aa6a-63cd14095040-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-f8nz8\" (UID: \"a69645ff-c03c-4296-aa6a-63cd14095040\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f8nz8" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.078302 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/35bb0d37-e388-42c3-ad03-2cbb0e4a9409-dbus-socket\") pod \"nmstate-handler-9rsf6\" (UID: \"35bb0d37-e388-42c3-ad03-2cbb0e4a9409\") " pod="openshift-nmstate/nmstate-handler-9rsf6" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.078327 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/35bb0d37-e388-42c3-ad03-2cbb0e4a9409-nmstate-lock\") pod \"nmstate-handler-9rsf6\" (UID: \"35bb0d37-e388-42c3-ad03-2cbb0e4a9409\") " pod="openshift-nmstate/nmstate-handler-9rsf6" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.078359 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a69645ff-c03c-4296-aa6a-63cd14095040-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-f8nz8\" (UID: \"a69645ff-c03c-4296-aa6a-63cd14095040\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f8nz8" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.078397 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcbzn\" (UniqueName: \"kubernetes.io/projected/35bb0d37-e388-42c3-ad03-2cbb0e4a9409-kube-api-access-qcbzn\") pod \"nmstate-handler-9rsf6\" (UID: \"35bb0d37-e388-42c3-ad03-2cbb0e4a9409\") " pod="openshift-nmstate/nmstate-handler-9rsf6" Feb 02 17:25:42 crc kubenswrapper[4858]: E0202 17:25:42.078557 4858 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 02 17:25:42 crc kubenswrapper[4858]: E0202 17:25:42.078608 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a69645ff-c03c-4296-aa6a-63cd14095040-plugin-serving-cert podName:a69645ff-c03c-4296-aa6a-63cd14095040 nodeName:}" failed. No retries permitted until 2026-02-02 17:25:42.578590444 +0000 UTC m=+643.731005709 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/a69645ff-c03c-4296-aa6a-63cd14095040-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-f8nz8" (UID: "a69645ff-c03c-4296-aa6a-63cd14095040") : secret "plugin-serving-cert" not found Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.079380 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a69645ff-c03c-4296-aa6a-63cd14095040-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-f8nz8\" (UID: \"a69645ff-c03c-4296-aa6a-63cd14095040\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f8nz8" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.106690 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch8qm\" (UniqueName: \"kubernetes.io/projected/a69645ff-c03c-4296-aa6a-63cd14095040-kube-api-access-ch8qm\") pod \"nmstate-console-plugin-7754f76f8b-f8nz8\" (UID: \"a69645ff-c03c-4296-aa6a-63cd14095040\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f8nz8" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.113234 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sspr6\" (UniqueName: \"kubernetes.io/projected/cdee88da-b22d-4fe4-98a2-a53cadedb993-kube-api-access-sspr6\") pod \"nmstate-metrics-54757c584b-8fldl\" (UID: \"cdee88da-b22d-4fe4-98a2-a53cadedb993\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-8fldl" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.153540 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5cfbb7769c-7h278"] Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.154429 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.158791 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-8fldl" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.164046 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cfbb7769c-7h278"] Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.179661 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcbzn\" (UniqueName: \"kubernetes.io/projected/35bb0d37-e388-42c3-ad03-2cbb0e4a9409-kube-api-access-qcbzn\") pod \"nmstate-handler-9rsf6\" (UID: \"35bb0d37-e388-42c3-ad03-2cbb0e4a9409\") " pod="openshift-nmstate/nmstate-handler-9rsf6" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.179728 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/35bb0d37-e388-42c3-ad03-2cbb0e4a9409-ovs-socket\") pod \"nmstate-handler-9rsf6\" (UID: \"35bb0d37-e388-42c3-ad03-2cbb0e4a9409\") " pod="openshift-nmstate/nmstate-handler-9rsf6" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.179776 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/35bb0d37-e388-42c3-ad03-2cbb0e4a9409-ovs-socket\") pod \"nmstate-handler-9rsf6\" (UID: \"35bb0d37-e388-42c3-ad03-2cbb0e4a9409\") " pod="openshift-nmstate/nmstate-handler-9rsf6" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.179840 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/35bb0d37-e388-42c3-ad03-2cbb0e4a9409-dbus-socket\") pod \"nmstate-handler-9rsf6\" (UID: \"35bb0d37-e388-42c3-ad03-2cbb0e4a9409\") " pod="openshift-nmstate/nmstate-handler-9rsf6" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.179859 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/35bb0d37-e388-42c3-ad03-2cbb0e4a9409-nmstate-lock\") pod \"nmstate-handler-9rsf6\" (UID: \"35bb0d37-e388-42c3-ad03-2cbb0e4a9409\") " pod="openshift-nmstate/nmstate-handler-9rsf6" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.179903 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/35bb0d37-e388-42c3-ad03-2cbb0e4a9409-nmstate-lock\") pod \"nmstate-handler-9rsf6\" (UID: \"35bb0d37-e388-42c3-ad03-2cbb0e4a9409\") " pod="openshift-nmstate/nmstate-handler-9rsf6" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.180141 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/35bb0d37-e388-42c3-ad03-2cbb0e4a9409-dbus-socket\") pod \"nmstate-handler-9rsf6\" (UID: \"35bb0d37-e388-42c3-ad03-2cbb0e4a9409\") " pod="openshift-nmstate/nmstate-handler-9rsf6" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.204105 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcbzn\" (UniqueName: \"kubernetes.io/projected/35bb0d37-e388-42c3-ad03-2cbb0e4a9409-kube-api-access-qcbzn\") pod \"nmstate-handler-9rsf6\" (UID: \"35bb0d37-e388-42c3-ad03-2cbb0e4a9409\") " pod="openshift-nmstate/nmstate-handler-9rsf6" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.212269 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-9rsf6" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.263454 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9rsf6" event={"ID":"35bb0d37-e388-42c3-ad03-2cbb0e4a9409","Type":"ContainerStarted","Data":"ca31270a5cd869552542e61e8672b2368a5a2dc12933e3ec706318abe3108eb7"} Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.280785 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1818e0a-051a-4a45-bab9-0ba49428a41b-trusted-ca-bundle\") pod \"console-5cfbb7769c-7h278\" (UID: \"a1818e0a-051a-4a45-bab9-0ba49428a41b\") " pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.280824 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl2kr\" (UniqueName: \"kubernetes.io/projected/a1818e0a-051a-4a45-bab9-0ba49428a41b-kube-api-access-dl2kr\") pod \"console-5cfbb7769c-7h278\" (UID: \"a1818e0a-051a-4a45-bab9-0ba49428a41b\") " pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.280876 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a1818e0a-051a-4a45-bab9-0ba49428a41b-console-oauth-config\") pod \"console-5cfbb7769c-7h278\" (UID: \"a1818e0a-051a-4a45-bab9-0ba49428a41b\") " pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.280892 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a1818e0a-051a-4a45-bab9-0ba49428a41b-oauth-serving-cert\") pod \"console-5cfbb7769c-7h278\" (UID: \"a1818e0a-051a-4a45-bab9-0ba49428a41b\") " pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.280919 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a1818e0a-051a-4a45-bab9-0ba49428a41b-service-ca\") pod \"console-5cfbb7769c-7h278\" (UID: \"a1818e0a-051a-4a45-bab9-0ba49428a41b\") " pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.281001 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a1818e0a-051a-4a45-bab9-0ba49428a41b-console-config\") pod \"console-5cfbb7769c-7h278\" (UID: \"a1818e0a-051a-4a45-bab9-0ba49428a41b\") " pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.281057 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1818e0a-051a-4a45-bab9-0ba49428a41b-console-serving-cert\") pod \"console-5cfbb7769c-7h278\" (UID: \"a1818e0a-051a-4a45-bab9-0ba49428a41b\") " pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.381622 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a1818e0a-051a-4a45-bab9-0ba49428a41b-console-oauth-config\") pod 
\"console-5cfbb7769c-7h278\" (UID: \"a1818e0a-051a-4a45-bab9-0ba49428a41b\") " pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.381664 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a1818e0a-051a-4a45-bab9-0ba49428a41b-oauth-serving-cert\") pod \"console-5cfbb7769c-7h278\" (UID: \"a1818e0a-051a-4a45-bab9-0ba49428a41b\") " pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.381695 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a1818e0a-051a-4a45-bab9-0ba49428a41b-service-ca\") pod \"console-5cfbb7769c-7h278\" (UID: \"a1818e0a-051a-4a45-bab9-0ba49428a41b\") " pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.381719 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a1818e0a-051a-4a45-bab9-0ba49428a41b-console-config\") pod \"console-5cfbb7769c-7h278\" (UID: \"a1818e0a-051a-4a45-bab9-0ba49428a41b\") " pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.381755 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1818e0a-051a-4a45-bab9-0ba49428a41b-console-serving-cert\") pod \"console-5cfbb7769c-7h278\" (UID: \"a1818e0a-051a-4a45-bab9-0ba49428a41b\") " pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.381826 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1818e0a-051a-4a45-bab9-0ba49428a41b-trusted-ca-bundle\") pod \"console-5cfbb7769c-7h278\" (UID: \"a1818e0a-051a-4a45-bab9-0ba49428a41b\") " pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.381848 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl2kr\" (UniqueName: \"kubernetes.io/projected/a1818e0a-051a-4a45-bab9-0ba49428a41b-kube-api-access-dl2kr\") pod \"console-5cfbb7769c-7h278\" (UID: \"a1818e0a-051a-4a45-bab9-0ba49428a41b\") " pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.382799 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a1818e0a-051a-4a45-bab9-0ba49428a41b-console-config\") pod \"console-5cfbb7769c-7h278\" (UID: \"a1818e0a-051a-4a45-bab9-0ba49428a41b\") " pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.383068 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1818e0a-051a-4a45-bab9-0ba49428a41b-trusted-ca-bundle\") pod \"console-5cfbb7769c-7h278\" (UID: \"a1818e0a-051a-4a45-bab9-0ba49428a41b\") " pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.383459 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a1818e0a-051a-4a45-bab9-0ba49428a41b-service-ca\") pod \"console-5cfbb7769c-7h278\" (UID: 
\"a1818e0a-051a-4a45-bab9-0ba49428a41b\") " pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.383456 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a1818e0a-051a-4a45-bab9-0ba49428a41b-oauth-serving-cert\") pod \"console-5cfbb7769c-7h278\" (UID: \"a1818e0a-051a-4a45-bab9-0ba49428a41b\") " pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.386233 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a1818e0a-051a-4a45-bab9-0ba49428a41b-console-oauth-config\") pod \"console-5cfbb7769c-7h278\" (UID: \"a1818e0a-051a-4a45-bab9-0ba49428a41b\") " pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.388516 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1818e0a-051a-4a45-bab9-0ba49428a41b-console-serving-cert\") pod \"console-5cfbb7769c-7h278\" (UID: \"a1818e0a-051a-4a45-bab9-0ba49428a41b\") " pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.399743 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl2kr\" (UniqueName: \"kubernetes.io/projected/a1818e0a-051a-4a45-bab9-0ba49428a41b-kube-api-access-dl2kr\") pod \"console-5cfbb7769c-7h278\" (UID: \"a1818e0a-051a-4a45-bab9-0ba49428a41b\") " pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.482904 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/11353839-2688-4112-a9d9-87bead34c26a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-t9gvb\" (UID: \"11353839-2688-4112-a9d9-87bead34c26a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t9gvb" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.486913 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/11353839-2688-4112-a9d9-87bead34c26a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-t9gvb\" (UID: \"11353839-2688-4112-a9d9-87bead34c26a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t9gvb" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.523701 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.558160 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-8fldl"] Feb 02 17:25:42 crc kubenswrapper[4858]: W0202 17:25:42.560381 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdee88da_b22d_4fe4_98a2_a53cadedb993.slice/crio-065b5d16576921c13183bc1e74384bbe265067c1350c75f4bbd08df05044f541 WatchSource:0}: Error finding container 065b5d16576921c13183bc1e74384bbe265067c1350c75f4bbd08df05044f541: Status 404 returned error can't find the container with id 065b5d16576921c13183bc1e74384bbe265067c1350c75f4bbd08df05044f541 Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.584210 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a69645ff-c03c-4296-aa6a-63cd14095040-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-f8nz8\" (UID: \"a69645ff-c03c-4296-aa6a-63cd14095040\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f8nz8" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.589867 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a69645ff-c03c-4296-aa6a-63cd14095040-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-f8nz8\" (UID: \"a69645ff-c03c-4296-aa6a-63cd14095040\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f8nz8" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.721716 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cfbb7769c-7h278"] Feb 02 17:25:42 crc kubenswrapper[4858]: W0202 17:25:42.728614 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1818e0a_051a_4a45_bab9_0ba49428a41b.slice/crio-1071b50668d4a2bbadef535fe81bcccca3a7e63c92f12cda2e48091bb80ab3ec WatchSource:0}: Error finding container 1071b50668d4a2bbadef535fe81bcccca3a7e63c92f12cda2e48091bb80ab3ec: Status 404 returned error can't find the container with id 1071b50668d4a2bbadef535fe81bcccca3a7e63c92f12cda2e48091bb80ab3ec Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.784896 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t9gvb" Feb 02 17:25:42 crc kubenswrapper[4858]: I0202 17:25:42.876827 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f8nz8" Feb 02 17:25:43 crc kubenswrapper[4858]: I0202 17:25:43.004460 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-t9gvb"] Feb 02 17:25:43 crc kubenswrapper[4858]: W0202 17:25:43.019705 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11353839_2688_4112_a9d9_87bead34c26a.slice/crio-66177220beb7797a9dc6c5047898f422f0c74fa7526e67d6900edfc415e71291 WatchSource:0}: Error finding container 66177220beb7797a9dc6c5047898f422f0c74fa7526e67d6900edfc415e71291: Status 404 returned error can't find the container with id 66177220beb7797a9dc6c5047898f422f0c74fa7526e67d6900edfc415e71291 Feb 02 17:25:43 crc kubenswrapper[4858]: I0202 17:25:43.122499 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-f8nz8"] Feb 02 17:25:43 crc kubenswrapper[4858]: W0202 17:25:43.125087 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda69645ff_c03c_4296_aa6a_63cd14095040.slice/crio-abec31c00aa474dffd63c5a4fd9f7d5c71bcafdbdd174a9215c5322aee571cb2 WatchSource:0}: Error finding container abec31c00aa474dffd63c5a4fd9f7d5c71bcafdbdd174a9215c5322aee571cb2: Status 404 returned error can't find the container with id abec31c00aa474dffd63c5a4fd9f7d5c71bcafdbdd174a9215c5322aee571cb2 Feb 02 17:25:43 crc kubenswrapper[4858]: I0202 17:25:43.269053 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f8nz8" event={"ID":"a69645ff-c03c-4296-aa6a-63cd14095040","Type":"ContainerStarted","Data":"abec31c00aa474dffd63c5a4fd9f7d5c71bcafdbdd174a9215c5322aee571cb2"} Feb 02 17:25:43 crc kubenswrapper[4858]: I0202 17:25:43.270101 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t9gvb" event={"ID":"11353839-2688-4112-a9d9-87bead34c26a","Type":"ContainerStarted","Data":"66177220beb7797a9dc6c5047898f422f0c74fa7526e67d6900edfc415e71291"} Feb 02 17:25:43 crc kubenswrapper[4858]: I0202 17:25:43.270768 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-8fldl" event={"ID":"cdee88da-b22d-4fe4-98a2-a53cadedb993","Type":"ContainerStarted","Data":"065b5d16576921c13183bc1e74384bbe265067c1350c75f4bbd08df05044f541"} Feb 02 17:25:43 crc kubenswrapper[4858]: I0202 17:25:43.272109 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cfbb7769c-7h278" event={"ID":"a1818e0a-051a-4a45-bab9-0ba49428a41b","Type":"ContainerStarted","Data":"ff819ba987afe1e7d57d13502c9dd6c5a674ad0d441647b326191eebefdc586d"} Feb 02 17:25:43 crc kubenswrapper[4858]: I0202 17:25:43.272158 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cfbb7769c-7h278" event={"ID":"a1818e0a-051a-4a45-bab9-0ba49428a41b","Type":"ContainerStarted","Data":"1071b50668d4a2bbadef535fe81bcccca3a7e63c92f12cda2e48091bb80ab3ec"} Feb 02 17:25:43 crc kubenswrapper[4858]: I0202 17:25:43.291674 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5cfbb7769c-7h278" podStartSLOduration=1.291658016 podStartE2EDuration="1.291658016s" podCreationTimestamp="2026-02-02 17:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-02 17:25:43.28563488 +0000 UTC m=+644.438050165" watchObservedRunningTime="2026-02-02 17:25:43.291658016 +0000 UTC m=+644.444073281" Feb 02 17:25:45 crc kubenswrapper[4858]: I0202 17:25:45.289583 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9rsf6" event={"ID":"35bb0d37-e388-42c3-ad03-2cbb0e4a9409","Type":"ContainerStarted","Data":"7943bbc61c6e8a1c0a7efee35f5e1b858762c16236b9fa509b06b33e7d9eaff8"} Feb 02 17:25:45 crc kubenswrapper[4858]: I0202 17:25:45.290363 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-9rsf6" Feb 02 17:25:45 crc kubenswrapper[4858]: I0202 17:25:45.291786 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t9gvb" event={"ID":"11353839-2688-4112-a9d9-87bead34c26a","Type":"ContainerStarted","Data":"6d31f8da311bbb0f0210877d70d91328783b2a0da0be344143e6954a75f93bff"} Feb 02 17:25:45 crc kubenswrapper[4858]: I0202 17:25:45.291871 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t9gvb" Feb 02 17:25:45 crc kubenswrapper[4858]: I0202 17:25:45.293398 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-8fldl" event={"ID":"cdee88da-b22d-4fe4-98a2-a53cadedb993","Type":"ContainerStarted","Data":"3aa42bfe2e11b64a73f9c0a2ea03cb4a6510413a36b926133bfb3b8893cd052f"} Feb 02 17:25:45 crc kubenswrapper[4858]: I0202 17:25:45.326683 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-9rsf6" podStartSLOduration=2.231133753 podStartE2EDuration="4.326658019s" podCreationTimestamp="2026-02-02 17:25:41 +0000 UTC" firstStartedPulling="2026-02-02 17:25:42.228891648 +0000 UTC m=+643.381306913" lastFinishedPulling="2026-02-02 17:25:44.324415914 +0000 UTC m=+645.476831179" observedRunningTime="2026-02-02 17:25:45.306100931 +0000 UTC m=+646.458516206" watchObservedRunningTime="2026-02-02 17:25:45.326658019 +0000 UTC m=+646.479073284" Feb 02 17:25:45 crc kubenswrapper[4858]: I0202 17:25:45.331332 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t9gvb" podStartSLOduration=3.030364009 podStartE2EDuration="4.331289797s" podCreationTimestamp="2026-02-02 17:25:41 +0000 UTC" firstStartedPulling="2026-02-02 17:25:43.024812302 +0000 UTC m=+644.177227567" lastFinishedPulling="2026-02-02 17:25:44.32573809 +0000 UTC m=+645.478153355" observedRunningTime="2026-02-02 17:25:45.323548603 +0000 UTC m=+646.475963858" watchObservedRunningTime="2026-02-02 17:25:45.331289797 +0000 UTC m=+646.483705062" Feb 02 17:25:46 crc kubenswrapper[4858]: I0202 17:25:46.301545 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f8nz8" event={"ID":"a69645ff-c03c-4296-aa6a-63cd14095040","Type":"ContainerStarted","Data":"40279ff88c854def9427169a010dfff32590d9c6ab6e70b0de06731cec7c6df5"} Feb 02 17:25:46 crc kubenswrapper[4858]: I0202 17:25:46.324244 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-f8nz8" podStartSLOduration=3.120145039 podStartE2EDuration="5.324210625s" podCreationTimestamp="2026-02-02 17:25:41 +0000 UTC" firstStartedPulling="2026-02-02 17:25:43.127490629 +0000 UTC m=+644.279905894" lastFinishedPulling="2026-02-02 17:25:45.331556225 
+0000 UTC m=+646.483971480" observedRunningTime="2026-02-02 17:25:46.320533874 +0000 UTC m=+647.472949179" watchObservedRunningTime="2026-02-02 17:25:46.324210625 +0000 UTC m=+647.476625890" Feb 02 17:25:47 crc kubenswrapper[4858]: I0202 17:25:47.311937 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-8fldl" event={"ID":"cdee88da-b22d-4fe4-98a2-a53cadedb993","Type":"ContainerStarted","Data":"84c9f7cd97d7bde443438a3ae17bc63d2349cab6e9d6b89eb121024a92911fe5"} Feb 02 17:25:47 crc kubenswrapper[4858]: I0202 17:25:47.338796 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-8fldl" podStartSLOduration=2.181978644 podStartE2EDuration="6.338773021s" podCreationTimestamp="2026-02-02 17:25:41 +0000 UTC" firstStartedPulling="2026-02-02 17:25:42.56227535 +0000 UTC m=+643.714690615" lastFinishedPulling="2026-02-02 17:25:46.719069727 +0000 UTC m=+647.871484992" observedRunningTime="2026-02-02 17:25:47.333565757 +0000 UTC m=+648.485981062" watchObservedRunningTime="2026-02-02 17:25:47.338773021 +0000 UTC m=+648.491188296" Feb 02 17:25:52 crc kubenswrapper[4858]: I0202 17:25:52.243777 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-9rsf6" Feb 02 17:25:52 crc kubenswrapper[4858]: I0202 17:25:52.523943 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:52 crc kubenswrapper[4858]: I0202 17:25:52.524135 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:52 crc kubenswrapper[4858]: I0202 17:25:52.532736 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:53 crc kubenswrapper[4858]: I0202 17:25:53.358681 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5cfbb7769c-7h278" Feb 02 17:25:53 crc kubenswrapper[4858]: I0202 17:25:53.417603 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zww4k"] Feb 02 17:26:00 crc kubenswrapper[4858]: I0202 17:26:00.725062 4858 scope.go:117] "RemoveContainer" containerID="9e5faa8ff18d17e744c69079f73c90e02264905a93efd6f32f18a56aef774107" Feb 02 17:26:02 crc kubenswrapper[4858]: I0202 17:26:02.794248 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t9gvb" Feb 02 17:26:16 crc kubenswrapper[4858]: I0202 17:26:16.215924 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m"] Feb 02 17:26:16 crc kubenswrapper[4858]: I0202 17:26:16.217715 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m" Feb 02 17:26:16 crc kubenswrapper[4858]: I0202 17:26:16.220470 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 17:26:16 crc kubenswrapper[4858]: I0202 17:26:16.235492 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m"] Feb 02 17:26:16 crc kubenswrapper[4858]: I0202 17:26:16.282068 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tcsd\" (UniqueName: \"kubernetes.io/projected/10d6ebd8-c224-43f2-b27c-bb5944ad819d-kube-api-access-6tcsd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m\" (UID: \"10d6ebd8-c224-43f2-b27c-bb5944ad819d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m" Feb 02 17:26:16 crc kubenswrapper[4858]: I0202 17:26:16.282135 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10d6ebd8-c224-43f2-b27c-bb5944ad819d-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m\" (UID: \"10d6ebd8-c224-43f2-b27c-bb5944ad819d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m" Feb 02 17:26:16 crc kubenswrapper[4858]: I0202 17:26:16.282206 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10d6ebd8-c224-43f2-b27c-bb5944ad819d-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m\" (UID: \"10d6ebd8-c224-43f2-b27c-bb5944ad819d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m" Feb 02 17:26:16 crc kubenswrapper[4858]: I0202 17:26:16.382853 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10d6ebd8-c224-43f2-b27c-bb5944ad819d-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m\" (UID: \"10d6ebd8-c224-43f2-b27c-bb5944ad819d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m" Feb 02 17:26:16 crc kubenswrapper[4858]: I0202 17:26:16.382923 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10d6ebd8-c224-43f2-b27c-bb5944ad819d-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m\" (UID: \"10d6ebd8-c224-43f2-b27c-bb5944ad819d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m" Feb 02 17:26:16 crc kubenswrapper[4858]: I0202 17:26:16.382997 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tcsd\" (UniqueName: \"kubernetes.io/projected/10d6ebd8-c224-43f2-b27c-bb5944ad819d-kube-api-access-6tcsd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m\" (UID: \"10d6ebd8-c224-43f2-b27c-bb5944ad819d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m" Feb 02 17:26:16 crc kubenswrapper[4858]: I0202 17:26:16.383647 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/10d6ebd8-c224-43f2-b27c-bb5944ad819d-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m\" (UID: \"10d6ebd8-c224-43f2-b27c-bb5944ad819d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m" Feb 02 17:26:16 crc kubenswrapper[4858]: I0202 17:26:16.383673 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10d6ebd8-c224-43f2-b27c-bb5944ad819d-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m\" (UID: \"10d6ebd8-c224-43f2-b27c-bb5944ad819d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m" Feb 02 17:26:16 crc kubenswrapper[4858]: I0202 17:26:16.413926 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tcsd\" (UniqueName: \"kubernetes.io/projected/10d6ebd8-c224-43f2-b27c-bb5944ad819d-kube-api-access-6tcsd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m\" (UID: \"10d6ebd8-c224-43f2-b27c-bb5944ad819d\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m" Feb 02 17:26:16 crc kubenswrapper[4858]: I0202 17:26:16.592801 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m" Feb 02 17:26:17 crc kubenswrapper[4858]: I0202 17:26:17.076545 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m"] Feb 02 17:26:17 crc kubenswrapper[4858]: I0202 17:26:17.536620 4858 generic.go:334] "Generic (PLEG): container finished" podID="10d6ebd8-c224-43f2-b27c-bb5944ad819d" containerID="ec56b59d6e2ac76c41942e703a65e1ecf91663c0bc92a0e1e49ffd7a534e99ba" exitCode=0 Feb 02 17:26:17 crc kubenswrapper[4858]: I0202 17:26:17.536680 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m" event={"ID":"10d6ebd8-c224-43f2-b27c-bb5944ad819d","Type":"ContainerDied","Data":"ec56b59d6e2ac76c41942e703a65e1ecf91663c0bc92a0e1e49ffd7a534e99ba"} Feb 02 17:26:17 crc kubenswrapper[4858]: I0202 17:26:17.538355 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m" event={"ID":"10d6ebd8-c224-43f2-b27c-bb5944ad819d","Type":"ContainerStarted","Data":"602349005b90434a65c522a06ecc481a788c8b4d217132412ce807b8ebb29118"} Feb 02 17:26:18 crc kubenswrapper[4858]: I0202 17:26:18.461207 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-zww4k" podUID="84734edc-960c-4a16-9281-b10a1dc0a710" containerName="console" containerID="cri-o://1d176a6cf98c50454b705c1a2e73aed7240dfa8d3e91d557caba1f61d6bf3fff" gracePeriod=15 Feb 02 17:26:18 crc kubenswrapper[4858]: I0202 17:26:18.953327 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zww4k_84734edc-960c-4a16-9281-b10a1dc0a710/console/0.log" Feb 02 17:26:18 crc kubenswrapper[4858]: I0202 17:26:18.953690 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-zww4k" Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.117987 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t2dr\" (UniqueName: \"kubernetes.io/projected/84734edc-960c-4a16-9281-b10a1dc0a710-kube-api-access-2t2dr\") pod \"84734edc-960c-4a16-9281-b10a1dc0a710\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.118098 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/84734edc-960c-4a16-9281-b10a1dc0a710-console-serving-cert\") pod \"84734edc-960c-4a16-9281-b10a1dc0a710\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.118122 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-service-ca\") pod \"84734edc-960c-4a16-9281-b10a1dc0a710\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.118138 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-oauth-serving-cert\") pod \"84734edc-960c-4a16-9281-b10a1dc0a710\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.118187 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-trusted-ca-bundle\") pod \"84734edc-960c-4a16-9281-b10a1dc0a710\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.118220 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-console-config\") pod \"84734edc-960c-4a16-9281-b10a1dc0a710\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.118248 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/84734edc-960c-4a16-9281-b10a1dc0a710-console-oauth-config\") pod \"84734edc-960c-4a16-9281-b10a1dc0a710\" (UID: \"84734edc-960c-4a16-9281-b10a1dc0a710\") " Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.119209 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "84734edc-960c-4a16-9281-b10a1dc0a710" (UID: "84734edc-960c-4a16-9281-b10a1dc0a710"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.119229 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "84734edc-960c-4a16-9281-b10a1dc0a710" (UID: "84734edc-960c-4a16-9281-b10a1dc0a710"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.119706 4858 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.119724 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.119812 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-console-config" (OuterVolumeSpecName: "console-config") pod "84734edc-960c-4a16-9281-b10a1dc0a710" (UID: "84734edc-960c-4a16-9281-b10a1dc0a710"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.120023 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-service-ca" (OuterVolumeSpecName: "service-ca") pod "84734edc-960c-4a16-9281-b10a1dc0a710" (UID: "84734edc-960c-4a16-9281-b10a1dc0a710"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.125141 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84734edc-960c-4a16-9281-b10a1dc0a710-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "84734edc-960c-4a16-9281-b10a1dc0a710" (UID: "84734edc-960c-4a16-9281-b10a1dc0a710"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.126175 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84734edc-960c-4a16-9281-b10a1dc0a710-kube-api-access-2t2dr" (OuterVolumeSpecName: "kube-api-access-2t2dr") pod "84734edc-960c-4a16-9281-b10a1dc0a710" (UID: "84734edc-960c-4a16-9281-b10a1dc0a710"). InnerVolumeSpecName "kube-api-access-2t2dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.127550 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84734edc-960c-4a16-9281-b10a1dc0a710-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "84734edc-960c-4a16-9281-b10a1dc0a710" (UID: "84734edc-960c-4a16-9281-b10a1dc0a710"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.221509 4858 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/84734edc-960c-4a16-9281-b10a1dc0a710-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.221563 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t2dr\" (UniqueName: \"kubernetes.io/projected/84734edc-960c-4a16-9281-b10a1dc0a710-kube-api-access-2t2dr\") on node \"crc\" DevicePath \"\"" Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.221583 4858 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/84734edc-960c-4a16-9281-b10a1dc0a710-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.221600 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.221616 4858 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/84734edc-960c-4a16-9281-b10a1dc0a710-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.555020 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zww4k_84734edc-960c-4a16-9281-b10a1dc0a710/console/0.log" Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.555058 4858 generic.go:334] "Generic (PLEG): container finished" podID="84734edc-960c-4a16-9281-b10a1dc0a710" containerID="1d176a6cf98c50454b705c1a2e73aed7240dfa8d3e91d557caba1f61d6bf3fff" exitCode=2 Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.555084 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zww4k" event={"ID":"84734edc-960c-4a16-9281-b10a1dc0a710","Type":"ContainerDied","Data":"1d176a6cf98c50454b705c1a2e73aed7240dfa8d3e91d557caba1f61d6bf3fff"} Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.555108 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zww4k" event={"ID":"84734edc-960c-4a16-9281-b10a1dc0a710","Type":"ContainerDied","Data":"a4ad7a5ed785515bc25b2b1e07c695b3275a1885dbd834ffbb50bfce914d62dd"} Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.555125 4858 scope.go:117] "RemoveContainer" containerID="1d176a6cf98c50454b705c1a2e73aed7240dfa8d3e91d557caba1f61d6bf3fff" Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.555207 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-zww4k" Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.597660 4858 scope.go:117] "RemoveContainer" containerID="1d176a6cf98c50454b705c1a2e73aed7240dfa8d3e91d557caba1f61d6bf3fff" Feb 02 17:26:19 crc kubenswrapper[4858]: E0202 17:26:19.598411 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d176a6cf98c50454b705c1a2e73aed7240dfa8d3e91d557caba1f61d6bf3fff\": container with ID starting with 1d176a6cf98c50454b705c1a2e73aed7240dfa8d3e91d557caba1f61d6bf3fff not found: ID does not exist" containerID="1d176a6cf98c50454b705c1a2e73aed7240dfa8d3e91d557caba1f61d6bf3fff" Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.598489 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d176a6cf98c50454b705c1a2e73aed7240dfa8d3e91d557caba1f61d6bf3fff"} err="failed to get container status \"1d176a6cf98c50454b705c1a2e73aed7240dfa8d3e91d557caba1f61d6bf3fff\": rpc error: code = NotFound desc = could not find container \"1d176a6cf98c50454b705c1a2e73aed7240dfa8d3e91d557caba1f61d6bf3fff\": container with ID starting with 1d176a6cf98c50454b705c1a2e73aed7240dfa8d3e91d557caba1f61d6bf3fff not found: ID does not exist" Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.600181 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zww4k"] Feb 02 17:26:19 crc kubenswrapper[4858]: I0202 17:26:19.605235 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-zww4k"] Feb 02 17:26:20 crc kubenswrapper[4858]: I0202 17:26:20.430485 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84734edc-960c-4a16-9281-b10a1dc0a710" path="/var/lib/kubelet/pods/84734edc-960c-4a16-9281-b10a1dc0a710/volumes" Feb 02 17:26:20 crc kubenswrapper[4858]: I0202 17:26:20.568488 4858 generic.go:334] "Generic (PLEG): container finished" podID="10d6ebd8-c224-43f2-b27c-bb5944ad819d" containerID="5ce064ac96839b50b07ca1676a433a91de4f6e96cd32eae1ab05b605154e4068" exitCode=0 Feb 02 17:26:20 crc kubenswrapper[4858]: I0202 17:26:20.568561 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m" event={"ID":"10d6ebd8-c224-43f2-b27c-bb5944ad819d","Type":"ContainerDied","Data":"5ce064ac96839b50b07ca1676a433a91de4f6e96cd32eae1ab05b605154e4068"} Feb 02 17:26:21 crc kubenswrapper[4858]: I0202 17:26:21.579106 4858 generic.go:334] "Generic (PLEG): container finished" podID="10d6ebd8-c224-43f2-b27c-bb5944ad819d" containerID="7cf49ab16ff8da2380884bfb7d88a1686bbaad59eac3fb54ac76a3126b5edf37" exitCode=0 Feb 02 17:26:21 crc kubenswrapper[4858]: I0202 17:26:21.579165 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m" event={"ID":"10d6ebd8-c224-43f2-b27c-bb5944ad819d","Type":"ContainerDied","Data":"7cf49ab16ff8da2380884bfb7d88a1686bbaad59eac3fb54ac76a3126b5edf37"} Feb 02 17:26:22 crc kubenswrapper[4858]: I0202 17:26:22.882477 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m" Feb 02 17:26:23 crc kubenswrapper[4858]: I0202 17:26:23.067551 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10d6ebd8-c224-43f2-b27c-bb5944ad819d-bundle\") pod \"10d6ebd8-c224-43f2-b27c-bb5944ad819d\" (UID: \"10d6ebd8-c224-43f2-b27c-bb5944ad819d\") " Feb 02 17:26:23 crc kubenswrapper[4858]: I0202 17:26:23.067647 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10d6ebd8-c224-43f2-b27c-bb5944ad819d-util\") pod \"10d6ebd8-c224-43f2-b27c-bb5944ad819d\" (UID: \"10d6ebd8-c224-43f2-b27c-bb5944ad819d\") " Feb 02 17:26:23 crc kubenswrapper[4858]: I0202 17:26:23.067812 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tcsd\" (UniqueName: \"kubernetes.io/projected/10d6ebd8-c224-43f2-b27c-bb5944ad819d-kube-api-access-6tcsd\") pod \"10d6ebd8-c224-43f2-b27c-bb5944ad819d\" (UID: \"10d6ebd8-c224-43f2-b27c-bb5944ad819d\") " Feb 02 17:26:23 crc kubenswrapper[4858]: I0202 17:26:23.069598 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10d6ebd8-c224-43f2-b27c-bb5944ad819d-bundle" (OuterVolumeSpecName: "bundle") pod "10d6ebd8-c224-43f2-b27c-bb5944ad819d" (UID: "10d6ebd8-c224-43f2-b27c-bb5944ad819d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:26:23 crc kubenswrapper[4858]: I0202 17:26:23.079213 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10d6ebd8-c224-43f2-b27c-bb5944ad819d-kube-api-access-6tcsd" (OuterVolumeSpecName: "kube-api-access-6tcsd") pod "10d6ebd8-c224-43f2-b27c-bb5944ad819d" (UID: "10d6ebd8-c224-43f2-b27c-bb5944ad819d"). InnerVolumeSpecName "kube-api-access-6tcsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:26:23 crc kubenswrapper[4858]: I0202 17:26:23.081759 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10d6ebd8-c224-43f2-b27c-bb5944ad819d-util" (OuterVolumeSpecName: "util") pod "10d6ebd8-c224-43f2-b27c-bb5944ad819d" (UID: "10d6ebd8-c224-43f2-b27c-bb5944ad819d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:26:23 crc kubenswrapper[4858]: I0202 17:26:23.169400 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tcsd\" (UniqueName: \"kubernetes.io/projected/10d6ebd8-c224-43f2-b27c-bb5944ad819d-kube-api-access-6tcsd\") on node \"crc\" DevicePath \"\"" Feb 02 17:26:23 crc kubenswrapper[4858]: I0202 17:26:23.169436 4858 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10d6ebd8-c224-43f2-b27c-bb5944ad819d-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:26:23 crc kubenswrapper[4858]: I0202 17:26:23.169445 4858 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10d6ebd8-c224-43f2-b27c-bb5944ad819d-util\") on node \"crc\" DevicePath \"\"" Feb 02 17:26:23 crc kubenswrapper[4858]: I0202 17:26:23.598461 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m" event={"ID":"10d6ebd8-c224-43f2-b27c-bb5944ad819d","Type":"ContainerDied","Data":"602349005b90434a65c522a06ecc481a788c8b4d217132412ce807b8ebb29118"} Feb 02 17:26:23 crc kubenswrapper[4858]: I0202 17:26:23.598520 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="602349005b90434a65c522a06ecc481a788c8b4d217132412ce807b8ebb29118" Feb 02 17:26:23 crc kubenswrapper[4858]: I0202 17:26:23.598578 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m" Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.774267 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-749875bd8b-wr4x9"] Feb 02 17:26:31 crc kubenswrapper[4858]: E0202 17:26:31.775706 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10d6ebd8-c224-43f2-b27c-bb5944ad819d" containerName="pull" Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.775783 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="10d6ebd8-c224-43f2-b27c-bb5944ad819d" containerName="pull" Feb 02 17:26:31 crc kubenswrapper[4858]: E0202 17:26:31.775847 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84734edc-960c-4a16-9281-b10a1dc0a710" containerName="console" Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.775896 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="84734edc-960c-4a16-9281-b10a1dc0a710" containerName="console" Feb 02 17:26:31 crc kubenswrapper[4858]: E0202 17:26:31.775945 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10d6ebd8-c224-43f2-b27c-bb5944ad819d" containerName="extract" Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.776009 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="10d6ebd8-c224-43f2-b27c-bb5944ad819d" containerName="extract" Feb 02 17:26:31 crc kubenswrapper[4858]: E0202 17:26:31.776064 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10d6ebd8-c224-43f2-b27c-bb5944ad819d" containerName="util" Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.776111 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="10d6ebd8-c224-43f2-b27c-bb5944ad819d" containerName="util" Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.776260 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="10d6ebd8-c224-43f2-b27c-bb5944ad819d" containerName="extract" Feb 
02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.776318 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="84734edc-960c-4a16-9281-b10a1dc0a710" containerName="console" Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.776760 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-749875bd8b-wr4x9" Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.778817 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.779178 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.779202 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-7ktx7" Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.779283 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.780839 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.801164 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-749875bd8b-wr4x9"] Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.804717 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3ba8c286-d0ce-40d1-b759-9d983474210b-webhook-cert\") pod \"metallb-operator-controller-manager-749875bd8b-wr4x9\" (UID: \"3ba8c286-d0ce-40d1-b759-9d983474210b\") " pod="metallb-system/metallb-operator-controller-manager-749875bd8b-wr4x9" Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.804899 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxqfx\" (UniqueName: \"kubernetes.io/projected/3ba8c286-d0ce-40d1-b759-9d983474210b-kube-api-access-nxqfx\") pod \"metallb-operator-controller-manager-749875bd8b-wr4x9\" (UID: \"3ba8c286-d0ce-40d1-b759-9d983474210b\") " pod="metallb-system/metallb-operator-controller-manager-749875bd8b-wr4x9" Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.804986 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3ba8c286-d0ce-40d1-b759-9d983474210b-apiservice-cert\") pod \"metallb-operator-controller-manager-749875bd8b-wr4x9\" (UID: \"3ba8c286-d0ce-40d1-b759-9d983474210b\") " pod="metallb-system/metallb-operator-controller-manager-749875bd8b-wr4x9" Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.905834 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3ba8c286-d0ce-40d1-b759-9d983474210b-apiservice-cert\") pod \"metallb-operator-controller-manager-749875bd8b-wr4x9\" (UID: \"3ba8c286-d0ce-40d1-b759-9d983474210b\") " pod="metallb-system/metallb-operator-controller-manager-749875bd8b-wr4x9" Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.906220 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/3ba8c286-d0ce-40d1-b759-9d983474210b-webhook-cert\") pod \"metallb-operator-controller-manager-749875bd8b-wr4x9\" (UID: \"3ba8c286-d0ce-40d1-b759-9d983474210b\") " pod="metallb-system/metallb-operator-controller-manager-749875bd8b-wr4x9" Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.906391 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxqfx\" (UniqueName: \"kubernetes.io/projected/3ba8c286-d0ce-40d1-b759-9d983474210b-kube-api-access-nxqfx\") pod \"metallb-operator-controller-manager-749875bd8b-wr4x9\" (UID: \"3ba8c286-d0ce-40d1-b759-9d983474210b\") " pod="metallb-system/metallb-operator-controller-manager-749875bd8b-wr4x9" Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.913890 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3ba8c286-d0ce-40d1-b759-9d983474210b-apiservice-cert\") pod \"metallb-operator-controller-manager-749875bd8b-wr4x9\" (UID: \"3ba8c286-d0ce-40d1-b759-9d983474210b\") " pod="metallb-system/metallb-operator-controller-manager-749875bd8b-wr4x9" Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.922852 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3ba8c286-d0ce-40d1-b759-9d983474210b-webhook-cert\") pod \"metallb-operator-controller-manager-749875bd8b-wr4x9\" (UID: \"3ba8c286-d0ce-40d1-b759-9d983474210b\") " pod="metallb-system/metallb-operator-controller-manager-749875bd8b-wr4x9" Feb 02 17:26:31 crc kubenswrapper[4858]: I0202 17:26:31.924559 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxqfx\" (UniqueName: \"kubernetes.io/projected/3ba8c286-d0ce-40d1-b759-9d983474210b-kube-api-access-nxqfx\") pod \"metallb-operator-controller-manager-749875bd8b-wr4x9\" (UID: \"3ba8c286-d0ce-40d1-b759-9d983474210b\") " pod="metallb-system/metallb-operator-controller-manager-749875bd8b-wr4x9" Feb 02 17:26:32 crc kubenswrapper[4858]: I0202 17:26:32.094081 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-749875bd8b-wr4x9" Feb 02 17:26:32 crc kubenswrapper[4858]: I0202 17:26:32.110332 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5854c4649f-zl8j4"] Feb 02 17:26:32 crc kubenswrapper[4858]: I0202 17:26:32.111154 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5854c4649f-zl8j4" Feb 02 17:26:32 crc kubenswrapper[4858]: I0202 17:26:32.115353 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 02 17:26:32 crc kubenswrapper[4858]: I0202 17:26:32.115445 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-sj4l7" Feb 02 17:26:32 crc kubenswrapper[4858]: I0202 17:26:32.115669 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 02 17:26:32 crc kubenswrapper[4858]: I0202 17:26:32.136108 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5854c4649f-zl8j4"] Feb 02 17:26:32 crc kubenswrapper[4858]: I0202 17:26:32.212548 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5efe8813-bcae-42c6-be1a-6f60809e7e3e-webhook-cert\") pod \"metallb-operator-webhook-server-5854c4649f-zl8j4\" (UID: \"5efe8813-bcae-42c6-be1a-6f60809e7e3e\") " pod="metallb-system/metallb-operator-webhook-server-5854c4649f-zl8j4" Feb 02 17:26:32 crc kubenswrapper[4858]: I0202 17:26:32.212627 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5efe8813-bcae-42c6-be1a-6f60809e7e3e-apiservice-cert\") pod \"metallb-operator-webhook-server-5854c4649f-zl8j4\" (UID: \"5efe8813-bcae-42c6-be1a-6f60809e7e3e\") " pod="metallb-system/metallb-operator-webhook-server-5854c4649f-zl8j4" Feb 02 17:26:32 crc kubenswrapper[4858]: I0202 17:26:32.212657 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzx6h\" (UniqueName: \"kubernetes.io/projected/5efe8813-bcae-42c6-be1a-6f60809e7e3e-kube-api-access-pzx6h\") pod \"metallb-operator-webhook-server-5854c4649f-zl8j4\" (UID: \"5efe8813-bcae-42c6-be1a-6f60809e7e3e\") " pod="metallb-system/metallb-operator-webhook-server-5854c4649f-zl8j4" Feb 02 17:26:32 crc kubenswrapper[4858]: I0202 17:26:32.314100 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5efe8813-bcae-42c6-be1a-6f60809e7e3e-apiservice-cert\") pod \"metallb-operator-webhook-server-5854c4649f-zl8j4\" (UID: \"5efe8813-bcae-42c6-be1a-6f60809e7e3e\") " pod="metallb-system/metallb-operator-webhook-server-5854c4649f-zl8j4" Feb 02 17:26:32 crc kubenswrapper[4858]: I0202 17:26:32.314317 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzx6h\" (UniqueName: \"kubernetes.io/projected/5efe8813-bcae-42c6-be1a-6f60809e7e3e-kube-api-access-pzx6h\") pod \"metallb-operator-webhook-server-5854c4649f-zl8j4\" (UID: \"5efe8813-bcae-42c6-be1a-6f60809e7e3e\") " pod="metallb-system/metallb-operator-webhook-server-5854c4649f-zl8j4" Feb 02 17:26:32 crc kubenswrapper[4858]: I0202 17:26:32.314416 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5efe8813-bcae-42c6-be1a-6f60809e7e3e-webhook-cert\") pod \"metallb-operator-webhook-server-5854c4649f-zl8j4\" (UID: \"5efe8813-bcae-42c6-be1a-6f60809e7e3e\") " pod="metallb-system/metallb-operator-webhook-server-5854c4649f-zl8j4" Feb 02 17:26:32 crc kubenswrapper[4858]: I0202 
17:26:32.335806 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5efe8813-bcae-42c6-be1a-6f60809e7e3e-apiservice-cert\") pod \"metallb-operator-webhook-server-5854c4649f-zl8j4\" (UID: \"5efe8813-bcae-42c6-be1a-6f60809e7e3e\") " pod="metallb-system/metallb-operator-webhook-server-5854c4649f-zl8j4" Feb 02 17:26:32 crc kubenswrapper[4858]: I0202 17:26:32.335851 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5efe8813-bcae-42c6-be1a-6f60809e7e3e-webhook-cert\") pod \"metallb-operator-webhook-server-5854c4649f-zl8j4\" (UID: \"5efe8813-bcae-42c6-be1a-6f60809e7e3e\") " pod="metallb-system/metallb-operator-webhook-server-5854c4649f-zl8j4" Feb 02 17:26:32 crc kubenswrapper[4858]: I0202 17:26:32.344784 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzx6h\" (UniqueName: \"kubernetes.io/projected/5efe8813-bcae-42c6-be1a-6f60809e7e3e-kube-api-access-pzx6h\") pod \"metallb-operator-webhook-server-5854c4649f-zl8j4\" (UID: \"5efe8813-bcae-42c6-be1a-6f60809e7e3e\") " pod="metallb-system/metallb-operator-webhook-server-5854c4649f-zl8j4" Feb 02 17:26:32 crc kubenswrapper[4858]: I0202 17:26:32.463665 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5854c4649f-zl8j4" Feb 02 17:26:32 crc kubenswrapper[4858]: I0202 17:26:32.483672 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-749875bd8b-wr4x9"] Feb 02 17:26:32 crc kubenswrapper[4858]: W0202 17:26:32.494094 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ba8c286_d0ce_40d1_b759_9d983474210b.slice/crio-64a4bcd50cbab3311afef4df676a45e591af9465f3ed0a9e6beef6dfed9c499f WatchSource:0}: Error finding container 64a4bcd50cbab3311afef4df676a45e591af9465f3ed0a9e6beef6dfed9c499f: Status 404 returned error can't find the container with id 64a4bcd50cbab3311afef4df676a45e591af9465f3ed0a9e6beef6dfed9c499f Feb 02 17:26:32 crc kubenswrapper[4858]: I0202 17:26:32.654257 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-749875bd8b-wr4x9" event={"ID":"3ba8c286-d0ce-40d1-b759-9d983474210b","Type":"ContainerStarted","Data":"64a4bcd50cbab3311afef4df676a45e591af9465f3ed0a9e6beef6dfed9c499f"} Feb 02 17:26:32 crc kubenswrapper[4858]: I0202 17:26:32.655408 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5854c4649f-zl8j4"] Feb 02 17:26:32 crc kubenswrapper[4858]: W0202 17:26:32.662043 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5efe8813_bcae_42c6_be1a_6f60809e7e3e.slice/crio-ab24c88fb7fb3dd2f0fc2e20f5259b1f27ab9c72e85fddc7f79454098c8e9529 WatchSource:0}: Error finding container ab24c88fb7fb3dd2f0fc2e20f5259b1f27ab9c72e85fddc7f79454098c8e9529: Status 404 returned error can't find the container with id ab24c88fb7fb3dd2f0fc2e20f5259b1f27ab9c72e85fddc7f79454098c8e9529 Feb 02 17:26:33 crc kubenswrapper[4858]: I0202 17:26:33.659888 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5854c4649f-zl8j4" 
event={"ID":"5efe8813-bcae-42c6-be1a-6f60809e7e3e","Type":"ContainerStarted","Data":"ab24c88fb7fb3dd2f0fc2e20f5259b1f27ab9c72e85fddc7f79454098c8e9529"} Feb 02 17:26:35 crc kubenswrapper[4858]: I0202 17:26:35.674011 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-749875bd8b-wr4x9" event={"ID":"3ba8c286-d0ce-40d1-b759-9d983474210b","Type":"ContainerStarted","Data":"5367f34eb04f47b417d56dd3498084ae77513b5f648b73c8471164f085a55c4f"} Feb 02 17:26:35 crc kubenswrapper[4858]: I0202 17:26:35.674366 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-749875bd8b-wr4x9" Feb 02 17:26:37 crc kubenswrapper[4858]: I0202 17:26:37.684450 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5854c4649f-zl8j4" event={"ID":"5efe8813-bcae-42c6-be1a-6f60809e7e3e","Type":"ContainerStarted","Data":"1a27dbd024f3fba7b061242ad64505a426c5a13db0b99332587104b3a719f27a"} Feb 02 17:26:37 crc kubenswrapper[4858]: I0202 17:26:37.685045 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5854c4649f-zl8j4" Feb 02 17:26:37 crc kubenswrapper[4858]: I0202 17:26:37.705127 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-749875bd8b-wr4x9" podStartSLOduration=3.91000608 podStartE2EDuration="6.705111316s" podCreationTimestamp="2026-02-02 17:26:31 +0000 UTC" firstStartedPulling="2026-02-02 17:26:32.495813222 +0000 UTC m=+693.648228487" lastFinishedPulling="2026-02-02 17:26:35.290918458 +0000 UTC m=+696.443333723" observedRunningTime="2026-02-02 17:26:35.692520979 +0000 UTC m=+696.844936244" watchObservedRunningTime="2026-02-02 17:26:37.705111316 +0000 UTC m=+698.857526581" Feb 02 17:26:52 crc kubenswrapper[4858]: I0202 17:26:52.470366 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5854c4649f-zl8j4" Feb 02 17:26:52 crc kubenswrapper[4858]: I0202 17:26:52.490310 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5854c4649f-zl8j4" podStartSLOduration=16.337445927 podStartE2EDuration="20.490290146s" podCreationTimestamp="2026-02-02 17:26:32 +0000 UTC" firstStartedPulling="2026-02-02 17:26:32.666092539 +0000 UTC m=+693.818507804" lastFinishedPulling="2026-02-02 17:26:36.818936758 +0000 UTC m=+697.971352023" observedRunningTime="2026-02-02 17:26:37.705845457 +0000 UTC m=+698.858260752" watchObservedRunningTime="2026-02-02 17:26:52.490290146 +0000 UTC m=+713.642705421" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.096813 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-749875bd8b-wr4x9" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.769609 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-svlq2"] Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.772509 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.774305 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-pgxzh"] Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.775145 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pgxzh" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.778350 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.778802 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.781130 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.782432 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-6hxhr" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.787557 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-pgxzh"] Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.841952 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-dk7fw"] Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.844870 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-dk7fw" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.847343 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.847454 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-7wrcr" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.847687 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.849688 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.872397 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-6bvx9"] Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.873751 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-6bvx9" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.876670 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.887901 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-6bvx9"] Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.898105 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/aa24ded3-4a92-4c89-bade-68547bdca597-frr-conf\") pod \"frr-k8s-svlq2\" (UID: \"aa24ded3-4a92-4c89-bade-68547bdca597\") " pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.898161 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/aa24ded3-4a92-4c89-bade-68547bdca597-frr-startup\") pod \"frr-k8s-svlq2\" (UID: \"aa24ded3-4a92-4c89-bade-68547bdca597\") " pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.898214 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/aa24ded3-4a92-4c89-bade-68547bdca597-reloader\") pod \"frr-k8s-svlq2\" (UID: \"aa24ded3-4a92-4c89-bade-68547bdca597\") " pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.898247 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfjm9\" (UniqueName: \"kubernetes.io/projected/1226b394-7ee5-4947-8d99-532106bb7baa-kube-api-access-pfjm9\") pod \"frr-k8s-webhook-server-7df86c4f6c-pgxzh\" (UID: \"1226b394-7ee5-4947-8d99-532106bb7baa\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pgxzh" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.898299 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/aa24ded3-4a92-4c89-bade-68547bdca597-metrics\") pod \"frr-k8s-svlq2\" (UID: \"aa24ded3-4a92-4c89-bade-68547bdca597\") " pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.898367 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1226b394-7ee5-4947-8d99-532106bb7baa-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-pgxzh\" (UID: \"1226b394-7ee5-4947-8d99-532106bb7baa\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pgxzh" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.898386 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa24ded3-4a92-4c89-bade-68547bdca597-metrics-certs\") pod \"frr-k8s-svlq2\" (UID: \"aa24ded3-4a92-4c89-bade-68547bdca597\") " pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.898420 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbv5r\" (UniqueName: \"kubernetes.io/projected/aa24ded3-4a92-4c89-bade-68547bdca597-kube-api-access-rbv5r\") pod \"frr-k8s-svlq2\" (UID: \"aa24ded3-4a92-4c89-bade-68547bdca597\") " pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:12 
crc kubenswrapper[4858]: I0202 17:27:12.898439 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/aa24ded3-4a92-4c89-bade-68547bdca597-frr-sockets\") pod \"frr-k8s-svlq2\" (UID: \"aa24ded3-4a92-4c89-bade-68547bdca597\") " pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.999521 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfjm9\" (UniqueName: \"kubernetes.io/projected/1226b394-7ee5-4947-8d99-532106bb7baa-kube-api-access-pfjm9\") pod \"frr-k8s-webhook-server-7df86c4f6c-pgxzh\" (UID: \"1226b394-7ee5-4947-8d99-532106bb7baa\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pgxzh" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.999566 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/aa24ded3-4a92-4c89-bade-68547bdca597-metrics\") pod \"frr-k8s-svlq2\" (UID: \"aa24ded3-4a92-4c89-bade-68547bdca597\") " pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.999591 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c72561ce-1db8-4883-97fe-488222b2f232-metallb-excludel2\") pod \"speaker-dk7fw\" (UID: \"c72561ce-1db8-4883-97fe-488222b2f232\") " pod="metallb-system/speaker-dk7fw" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.999619 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1226b394-7ee5-4947-8d99-532106bb7baa-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-pgxzh\" (UID: \"1226b394-7ee5-4947-8d99-532106bb7baa\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pgxzh" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.999636 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa24ded3-4a92-4c89-bade-68547bdca597-metrics-certs\") pod \"frr-k8s-svlq2\" (UID: \"aa24ded3-4a92-4c89-bade-68547bdca597\") " pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.999668 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvcs6\" (UniqueName: \"kubernetes.io/projected/0b58bf4d-52bb-4876-8555-b8b403e0cbcb-kube-api-access-mvcs6\") pod \"controller-6968d8fdc4-6bvx9\" (UID: \"0b58bf4d-52bb-4876-8555-b8b403e0cbcb\") " pod="metallb-system/controller-6968d8fdc4-6bvx9" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.999694 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbv5r\" (UniqueName: \"kubernetes.io/projected/aa24ded3-4a92-4c89-bade-68547bdca597-kube-api-access-rbv5r\") pod \"frr-k8s-svlq2\" (UID: \"aa24ded3-4a92-4c89-bade-68547bdca597\") " pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.999710 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/aa24ded3-4a92-4c89-bade-68547bdca597-frr-sockets\") pod \"frr-k8s-svlq2\" (UID: \"aa24ded3-4a92-4c89-bade-68547bdca597\") " pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.999730 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c72561ce-1db8-4883-97fe-488222b2f232-metrics-certs\") pod \"speaker-dk7fw\" (UID: \"c72561ce-1db8-4883-97fe-488222b2f232\") " pod="metallb-system/speaker-dk7fw" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.999752 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/aa24ded3-4a92-4c89-bade-68547bdca597-frr-conf\") pod \"frr-k8s-svlq2\" (UID: \"aa24ded3-4a92-4c89-bade-68547bdca597\") " pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.999770 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/aa24ded3-4a92-4c89-bade-68547bdca597-frr-startup\") pod \"frr-k8s-svlq2\" (UID: \"aa24ded3-4a92-4c89-bade-68547bdca597\") " pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:12 crc kubenswrapper[4858]: I0202 17:27:12.999785 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b58bf4d-52bb-4876-8555-b8b403e0cbcb-cert\") pod \"controller-6968d8fdc4-6bvx9\" (UID: \"0b58bf4d-52bb-4876-8555-b8b403e0cbcb\") " pod="metallb-system/controller-6968d8fdc4-6bvx9" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:12.999804 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c72561ce-1db8-4883-97fe-488222b2f232-memberlist\") pod \"speaker-dk7fw\" (UID: \"c72561ce-1db8-4883-97fe-488222b2f232\") " pod="metallb-system/speaker-dk7fw" Feb 02 17:27:13 crc kubenswrapper[4858]: E0202 17:27:12.999805 4858 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 02 17:27:13 crc kubenswrapper[4858]: E0202 17:27:12.999885 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1226b394-7ee5-4947-8d99-532106bb7baa-cert podName:1226b394-7ee5-4947-8d99-532106bb7baa nodeName:}" failed. No retries permitted until 2026-02-02 17:27:13.499864327 +0000 UTC m=+734.652279712 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1226b394-7ee5-4947-8d99-532106bb7baa-cert") pod "frr-k8s-webhook-server-7df86c4f6c-pgxzh" (UID: "1226b394-7ee5-4947-8d99-532106bb7baa") : secret "frr-k8s-webhook-server-cert" not found Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:12.999823 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4tcf\" (UniqueName: \"kubernetes.io/projected/c72561ce-1db8-4883-97fe-488222b2f232-kube-api-access-q4tcf\") pod \"speaker-dk7fw\" (UID: \"c72561ce-1db8-4883-97fe-488222b2f232\") " pod="metallb-system/speaker-dk7fw" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.000061 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/aa24ded3-4a92-4c89-bade-68547bdca597-reloader\") pod \"frr-k8s-svlq2\" (UID: \"aa24ded3-4a92-4c89-bade-68547bdca597\") " pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.000135 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b58bf4d-52bb-4876-8555-b8b403e0cbcb-metrics-certs\") pod \"controller-6968d8fdc4-6bvx9\" (UID: \"0b58bf4d-52bb-4876-8555-b8b403e0cbcb\") " pod="metallb-system/controller-6968d8fdc4-6bvx9" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.000173 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/aa24ded3-4a92-4c89-bade-68547bdca597-metrics\") pod \"frr-k8s-svlq2\" (UID: \"aa24ded3-4a92-4c89-bade-68547bdca597\") " pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.000577 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/aa24ded3-4a92-4c89-bade-68547bdca597-frr-conf\") pod \"frr-k8s-svlq2\" (UID: \"aa24ded3-4a92-4c89-bade-68547bdca597\") " pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.000594 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/aa24ded3-4a92-4c89-bade-68547bdca597-reloader\") pod \"frr-k8s-svlq2\" (UID: \"aa24ded3-4a92-4c89-bade-68547bdca597\") " pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.000687 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/aa24ded3-4a92-4c89-bade-68547bdca597-frr-sockets\") pod \"frr-k8s-svlq2\" (UID: \"aa24ded3-4a92-4c89-bade-68547bdca597\") " pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.001336 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/aa24ded3-4a92-4c89-bade-68547bdca597-frr-startup\") pod \"frr-k8s-svlq2\" (UID: \"aa24ded3-4a92-4c89-bade-68547bdca597\") " pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.009028 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/aa24ded3-4a92-4c89-bade-68547bdca597-metrics-certs\") pod \"frr-k8s-svlq2\" (UID: \"aa24ded3-4a92-4c89-bade-68547bdca597\") " pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 
17:27:13.019223 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfjm9\" (UniqueName: \"kubernetes.io/projected/1226b394-7ee5-4947-8d99-532106bb7baa-kube-api-access-pfjm9\") pod \"frr-k8s-webhook-server-7df86c4f6c-pgxzh\" (UID: \"1226b394-7ee5-4947-8d99-532106bb7baa\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pgxzh" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.021745 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbv5r\" (UniqueName: \"kubernetes.io/projected/aa24ded3-4a92-4c89-bade-68547bdca597-kube-api-access-rbv5r\") pod \"frr-k8s-svlq2\" (UID: \"aa24ded3-4a92-4c89-bade-68547bdca597\") " pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.097728 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.101408 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c72561ce-1db8-4883-97fe-488222b2f232-memberlist\") pod \"speaker-dk7fw\" (UID: \"c72561ce-1db8-4883-97fe-488222b2f232\") " pod="metallb-system/speaker-dk7fw" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.101443 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4tcf\" (UniqueName: \"kubernetes.io/projected/c72561ce-1db8-4883-97fe-488222b2f232-kube-api-access-q4tcf\") pod \"speaker-dk7fw\" (UID: \"c72561ce-1db8-4883-97fe-488222b2f232\") " pod="metallb-system/speaker-dk7fw" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.101472 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b58bf4d-52bb-4876-8555-b8b403e0cbcb-metrics-certs\") pod \"controller-6968d8fdc4-6bvx9\" (UID: \"0b58bf4d-52bb-4876-8555-b8b403e0cbcb\") " pod="metallb-system/controller-6968d8fdc4-6bvx9" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.101504 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c72561ce-1db8-4883-97fe-488222b2f232-metallb-excludel2\") pod \"speaker-dk7fw\" (UID: \"c72561ce-1db8-4883-97fe-488222b2f232\") " pod="metallb-system/speaker-dk7fw" Feb 02 17:27:13 crc kubenswrapper[4858]: E0202 17:27:13.101565 4858 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.101579 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvcs6\" (UniqueName: \"kubernetes.io/projected/0b58bf4d-52bb-4876-8555-b8b403e0cbcb-kube-api-access-mvcs6\") pod \"controller-6968d8fdc4-6bvx9\" (UID: \"0b58bf4d-52bb-4876-8555-b8b403e0cbcb\") " pod="metallb-system/controller-6968d8fdc4-6bvx9" Feb 02 17:27:13 crc kubenswrapper[4858]: E0202 17:27:13.101616 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c72561ce-1db8-4883-97fe-488222b2f232-memberlist podName:c72561ce-1db8-4883-97fe-488222b2f232 nodeName:}" failed. No retries permitted until 2026-02-02 17:27:13.601600765 +0000 UTC m=+734.754016030 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c72561ce-1db8-4883-97fe-488222b2f232-memberlist") pod "speaker-dk7fw" (UID: "c72561ce-1db8-4883-97fe-488222b2f232") : secret "metallb-memberlist" not found Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.101633 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c72561ce-1db8-4883-97fe-488222b2f232-metrics-certs\") pod \"speaker-dk7fw\" (UID: \"c72561ce-1db8-4883-97fe-488222b2f232\") " pod="metallb-system/speaker-dk7fw" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.101665 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b58bf4d-52bb-4876-8555-b8b403e0cbcb-cert\") pod \"controller-6968d8fdc4-6bvx9\" (UID: \"0b58bf4d-52bb-4876-8555-b8b403e0cbcb\") " pod="metallb-system/controller-6968d8fdc4-6bvx9" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.102258 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c72561ce-1db8-4883-97fe-488222b2f232-metallb-excludel2\") pod \"speaker-dk7fw\" (UID: \"c72561ce-1db8-4883-97fe-488222b2f232\") " pod="metallb-system/speaker-dk7fw" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.103753 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.105501 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b58bf4d-52bb-4876-8555-b8b403e0cbcb-metrics-certs\") pod \"controller-6968d8fdc4-6bvx9\" (UID: \"0b58bf4d-52bb-4876-8555-b8b403e0cbcb\") " pod="metallb-system/controller-6968d8fdc4-6bvx9" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.108409 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c72561ce-1db8-4883-97fe-488222b2f232-metrics-certs\") pod \"speaker-dk7fw\" (UID: \"c72561ce-1db8-4883-97fe-488222b2f232\") " pod="metallb-system/speaker-dk7fw" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.115176 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b58bf4d-52bb-4876-8555-b8b403e0cbcb-cert\") pod \"controller-6968d8fdc4-6bvx9\" (UID: \"0b58bf4d-52bb-4876-8555-b8b403e0cbcb\") " pod="metallb-system/controller-6968d8fdc4-6bvx9" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.121390 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvcs6\" (UniqueName: \"kubernetes.io/projected/0b58bf4d-52bb-4876-8555-b8b403e0cbcb-kube-api-access-mvcs6\") pod \"controller-6968d8fdc4-6bvx9\" (UID: \"0b58bf4d-52bb-4876-8555-b8b403e0cbcb\") " pod="metallb-system/controller-6968d8fdc4-6bvx9" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.122587 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4tcf\" (UniqueName: \"kubernetes.io/projected/c72561ce-1db8-4883-97fe-488222b2f232-kube-api-access-q4tcf\") pod \"speaker-dk7fw\" (UID: \"c72561ce-1db8-4883-97fe-488222b2f232\") " pod="metallb-system/speaker-dk7fw" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.187074 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-6bvx9" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.507541 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1226b394-7ee5-4947-8d99-532106bb7baa-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-pgxzh\" (UID: \"1226b394-7ee5-4947-8d99-532106bb7baa\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pgxzh" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.513550 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1226b394-7ee5-4947-8d99-532106bb7baa-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-pgxzh\" (UID: \"1226b394-7ee5-4947-8d99-532106bb7baa\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pgxzh" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.584143 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-6bvx9"] Feb 02 17:27:13 crc kubenswrapper[4858]: W0202 17:27:13.589407 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b58bf4d_52bb_4876_8555_b8b403e0cbcb.slice/crio-0290a052b57e0dcf1db3e44a8c583d38881af16a331ff79f3966c62220576d6a WatchSource:0}: Error finding container 0290a052b57e0dcf1db3e44a8c583d38881af16a331ff79f3966c62220576d6a: Status 404 returned error can't find the container with id 0290a052b57e0dcf1db3e44a8c583d38881af16a331ff79f3966c62220576d6a Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.609013 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c72561ce-1db8-4883-97fe-488222b2f232-memberlist\") pod \"speaker-dk7fw\" (UID: \"c72561ce-1db8-4883-97fe-488222b2f232\") " pod="metallb-system/speaker-dk7fw" Feb 02 17:27:13 crc kubenswrapper[4858]: E0202 17:27:13.609250 4858 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 02 17:27:13 crc kubenswrapper[4858]: E0202 17:27:13.609343 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c72561ce-1db8-4883-97fe-488222b2f232-memberlist podName:c72561ce-1db8-4883-97fe-488222b2f232 nodeName:}" failed. No retries permitted until 2026-02-02 17:27:14.609317563 +0000 UTC m=+735.761732868 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c72561ce-1db8-4883-97fe-488222b2f232-memberlist") pod "speaker-dk7fw" (UID: "c72561ce-1db8-4883-97fe-488222b2f232") : secret "metallb-memberlist" not found Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.713907 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pgxzh" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.886924 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-svlq2" event={"ID":"aa24ded3-4a92-4c89-bade-68547bdca597","Type":"ContainerStarted","Data":"89108d72a85f6fd263fcf0f53b71f33c026dfbdaf243fcc860223647d4030ecc"} Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.888730 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6bvx9" event={"ID":"0b58bf4d-52bb-4876-8555-b8b403e0cbcb","Type":"ContainerStarted","Data":"ce8d16d8c99169f1df5ccaf66e3b433b916960c7e11ed2135cb7e91a51258deb"} Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.888749 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6bvx9" event={"ID":"0b58bf4d-52bb-4876-8555-b8b403e0cbcb","Type":"ContainerStarted","Data":"9fef0d8e3b7c3b59dbe063b27acb01c69a93b40798df0585454d8f60b7342765"} Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.888760 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6bvx9" event={"ID":"0b58bf4d-52bb-4876-8555-b8b403e0cbcb","Type":"ContainerStarted","Data":"0290a052b57e0dcf1db3e44a8c583d38881af16a331ff79f3966c62220576d6a"} Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.888908 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-6bvx9" Feb 02 17:27:13 crc kubenswrapper[4858]: I0202 17:27:13.906933 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-6bvx9" podStartSLOduration=1.906911883 podStartE2EDuration="1.906911883s" podCreationTimestamp="2026-02-02 17:27:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:27:13.90234845 +0000 UTC m=+735.054763715" watchObservedRunningTime="2026-02-02 17:27:13.906911883 +0000 UTC m=+735.059327148" Feb 02 17:27:14 crc kubenswrapper[4858]: I0202 17:27:14.113480 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-pgxzh"] Feb 02 17:27:14 crc kubenswrapper[4858]: I0202 17:27:14.623814 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c72561ce-1db8-4883-97fe-488222b2f232-memberlist\") pod \"speaker-dk7fw\" (UID: \"c72561ce-1db8-4883-97fe-488222b2f232\") " pod="metallb-system/speaker-dk7fw" Feb 02 17:27:14 crc kubenswrapper[4858]: I0202 17:27:14.629413 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c72561ce-1db8-4883-97fe-488222b2f232-memberlist\") pod \"speaker-dk7fw\" (UID: \"c72561ce-1db8-4883-97fe-488222b2f232\") " pod="metallb-system/speaker-dk7fw" Feb 02 17:27:14 crc kubenswrapper[4858]: I0202 17:27:14.658296 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-dk7fw" Feb 02 17:27:14 crc kubenswrapper[4858]: I0202 17:27:14.897449 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dk7fw" event={"ID":"c72561ce-1db8-4883-97fe-488222b2f232","Type":"ContainerStarted","Data":"1f1cdcb84da8733041d5285d9ceae2497dabada3c09ade4712cf38fa20c5ebe8"} Feb 02 17:27:14 crc kubenswrapper[4858]: I0202 17:27:14.899452 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pgxzh" event={"ID":"1226b394-7ee5-4947-8d99-532106bb7baa","Type":"ContainerStarted","Data":"58bf2ad8e9bad08692e43e63881e748d6d1531613b4d86d50684c558cd25b694"} Feb 02 17:27:15 crc kubenswrapper[4858]: I0202 17:27:15.916749 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dk7fw" event={"ID":"c72561ce-1db8-4883-97fe-488222b2f232","Type":"ContainerStarted","Data":"81abccd7000b4dd41e58c554eb9b932c20ad49119a1a2da3be815958ddb85e74"} Feb 02 17:27:15 crc kubenswrapper[4858]: I0202 17:27:15.917035 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-dk7fw" event={"ID":"c72561ce-1db8-4883-97fe-488222b2f232","Type":"ContainerStarted","Data":"c25829ea4a20d3da975a0582c1361d9722ed883866a256b020c0d6ad9e45e357"} Feb 02 17:27:15 crc kubenswrapper[4858]: I0202 17:27:15.917199 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-dk7fw" Feb 02 17:27:15 crc kubenswrapper[4858]: I0202 17:27:15.939431 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-dk7fw" podStartSLOduration=3.939414958 podStartE2EDuration="3.939414958s" podCreationTimestamp="2026-02-02 17:27:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:27:15.938538452 +0000 UTC m=+737.090953747" watchObservedRunningTime="2026-02-02 17:27:15.939414958 +0000 UTC m=+737.091830223" Feb 02 17:27:20 crc kubenswrapper[4858]: I0202 17:27:20.964491 4858 generic.go:334] "Generic (PLEG): container finished" podID="aa24ded3-4a92-4c89-bade-68547bdca597" containerID="7ae757e3b48ba747d94ac77107f776ca7b781ae3a4fc4775e5e70ba9a33569af" exitCode=0 Feb 02 17:27:20 crc kubenswrapper[4858]: I0202 17:27:20.964539 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-svlq2" event={"ID":"aa24ded3-4a92-4c89-bade-68547bdca597","Type":"ContainerDied","Data":"7ae757e3b48ba747d94ac77107f776ca7b781ae3a4fc4775e5e70ba9a33569af"} Feb 02 17:27:20 crc kubenswrapper[4858]: I0202 17:27:20.968219 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pgxzh" event={"ID":"1226b394-7ee5-4947-8d99-532106bb7baa","Type":"ContainerStarted","Data":"d43a66a5deb23404da3ccaa2a1918e35a8fd3912e18395f2b3aaece8ad8b9050"} Feb 02 17:27:20 crc kubenswrapper[4858]: I0202 17:27:20.968408 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pgxzh" Feb 02 17:27:21 crc kubenswrapper[4858]: I0202 17:27:21.017901 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pgxzh" podStartSLOduration=2.851247787 podStartE2EDuration="9.017879237s" podCreationTimestamp="2026-02-02 17:27:12 +0000 UTC" firstStartedPulling="2026-02-02 17:27:14.123903512 +0000 UTC m=+735.276318777" lastFinishedPulling="2026-02-02 
17:27:20.290534962 +0000 UTC m=+741.442950227" observedRunningTime="2026-02-02 17:27:21.009469091 +0000 UTC m=+742.161884376" watchObservedRunningTime="2026-02-02 17:27:21.017879237 +0000 UTC m=+742.170294512" Feb 02 17:27:21 crc kubenswrapper[4858]: I0202 17:27:21.980167 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-svlq2" event={"ID":"aa24ded3-4a92-4c89-bade-68547bdca597","Type":"ContainerDied","Data":"35c09b4a5083f2192ecec798b079cf3e92c7f44e21c967f0bf0ce06acc36fc13"} Feb 02 17:27:21 crc kubenswrapper[4858]: I0202 17:27:21.979959 4858 generic.go:334] "Generic (PLEG): container finished" podID="aa24ded3-4a92-4c89-bade-68547bdca597" containerID="35c09b4a5083f2192ecec798b079cf3e92c7f44e21c967f0bf0ce06acc36fc13" exitCode=0 Feb 02 17:27:22 crc kubenswrapper[4858]: I0202 17:27:22.990862 4858 generic.go:334] "Generic (PLEG): container finished" podID="aa24ded3-4a92-4c89-bade-68547bdca597" containerID="f8526c6b356356285c8bb498b1b8ff4e43b747abe5ed9fa70f6e2a57b8ee85fd" exitCode=0 Feb 02 17:27:22 crc kubenswrapper[4858]: I0202 17:27:22.991116 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-svlq2" event={"ID":"aa24ded3-4a92-4c89-bade-68547bdca597","Type":"ContainerDied","Data":"f8526c6b356356285c8bb498b1b8ff4e43b747abe5ed9fa70f6e2a57b8ee85fd"} Feb 02 17:27:23 crc kubenswrapper[4858]: I0202 17:27:23.190749 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-6bvx9" Feb 02 17:27:24 crc kubenswrapper[4858]: I0202 17:27:24.009924 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-svlq2" event={"ID":"aa24ded3-4a92-4c89-bade-68547bdca597","Type":"ContainerStarted","Data":"6435673e3aa08995c10982153cd6fd1af0397cb1b0c31c26222ee23dfc56e677"} Feb 02 17:27:24 crc kubenswrapper[4858]: I0202 17:27:24.010505 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-svlq2" event={"ID":"aa24ded3-4a92-4c89-bade-68547bdca597","Type":"ContainerStarted","Data":"b84a28e85e6cadd8a5d48dec963baf080c4318ac8f88555f42641a62124c79c2"} Feb 02 17:27:24 crc kubenswrapper[4858]: I0202 17:27:24.010521 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-svlq2" event={"ID":"aa24ded3-4a92-4c89-bade-68547bdca597","Type":"ContainerStarted","Data":"9bb8b638eebfd1c8284f416f4ab9afed3bd2456b062b64bb44383ebdd08d850c"} Feb 02 17:27:24 crc kubenswrapper[4858]: I0202 17:27:24.010532 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-svlq2" event={"ID":"aa24ded3-4a92-4c89-bade-68547bdca597","Type":"ContainerStarted","Data":"445919bc97bda935b239f9275ff7b17a601dc4a111a078fdcc6f1deed3cf5ec7"} Feb 02 17:27:24 crc kubenswrapper[4858]: I0202 17:27:24.010543 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-svlq2" event={"ID":"aa24ded3-4a92-4c89-bade-68547bdca597","Type":"ContainerStarted","Data":"4572b2dcf1a26c1a3ff0e124bed6bb10e7d3de2e8070094fd732ca1f4d556145"} Feb 02 17:27:24 crc kubenswrapper[4858]: I0202 17:27:24.679484 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-dk7fw" Feb 02 17:27:25 crc kubenswrapper[4858]: I0202 17:27:25.028859 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-svlq2" event={"ID":"aa24ded3-4a92-4c89-bade-68547bdca597","Type":"ContainerStarted","Data":"2a37bc97e59806e73e506e2f9524cffc00d385a08f4820e680d474b67470d5fe"} Feb 02 17:27:25 crc kubenswrapper[4858]: 
I0202 17:27:25.029361 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:25 crc kubenswrapper[4858]: I0202 17:27:25.058513 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-svlq2" podStartSLOduration=6.020284141 podStartE2EDuration="13.058492123s" podCreationTimestamp="2026-02-02 17:27:12 +0000 UTC" firstStartedPulling="2026-02-02 17:27:13.251515397 +0000 UTC m=+734.403930662" lastFinishedPulling="2026-02-02 17:27:20.289723369 +0000 UTC m=+741.442138644" observedRunningTime="2026-02-02 17:27:25.054347542 +0000 UTC m=+746.206762837" watchObservedRunningTime="2026-02-02 17:27:25.058492123 +0000 UTC m=+746.210907398" Feb 02 17:27:27 crc kubenswrapper[4858]: I0202 17:27:27.347528 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6cl4c"] Feb 02 17:27:27 crc kubenswrapper[4858]: I0202 17:27:27.349434 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6cl4c" Feb 02 17:27:27 crc kubenswrapper[4858]: I0202 17:27:27.356339 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 02 17:27:27 crc kubenswrapper[4858]: I0202 17:27:27.356353 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 02 17:27:27 crc kubenswrapper[4858]: I0202 17:27:27.356463 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-rfvpx" Feb 02 17:27:27 crc kubenswrapper[4858]: I0202 17:27:27.363834 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6cl4c"] Feb 02 17:27:27 crc kubenswrapper[4858]: I0202 17:27:27.510079 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x82lh\" (UniqueName: \"kubernetes.io/projected/b123c5f8-831f-41b2-a1d0-fcde62501499-kube-api-access-x82lh\") pod \"openstack-operator-index-6cl4c\" (UID: \"b123c5f8-831f-41b2-a1d0-fcde62501499\") " pod="openstack-operators/openstack-operator-index-6cl4c" Feb 02 17:27:27 crc kubenswrapper[4858]: I0202 17:27:27.612204 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x82lh\" (UniqueName: \"kubernetes.io/projected/b123c5f8-831f-41b2-a1d0-fcde62501499-kube-api-access-x82lh\") pod \"openstack-operator-index-6cl4c\" (UID: \"b123c5f8-831f-41b2-a1d0-fcde62501499\") " pod="openstack-operators/openstack-operator-index-6cl4c" Feb 02 17:27:27 crc kubenswrapper[4858]: I0202 17:27:27.633288 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x82lh\" (UniqueName: \"kubernetes.io/projected/b123c5f8-831f-41b2-a1d0-fcde62501499-kube-api-access-x82lh\") pod \"openstack-operator-index-6cl4c\" (UID: \"b123c5f8-831f-41b2-a1d0-fcde62501499\") " pod="openstack-operators/openstack-operator-index-6cl4c" Feb 02 17:27:27 crc kubenswrapper[4858]: I0202 17:27:27.683764 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6cl4c" Feb 02 17:27:28 crc kubenswrapper[4858]: I0202 17:27:28.098233 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:28 crc kubenswrapper[4858]: I0202 17:27:28.159781 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:28 crc kubenswrapper[4858]: I0202 17:27:28.162946 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6cl4c"] Feb 02 17:27:28 crc kubenswrapper[4858]: W0202 17:27:28.170851 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb123c5f8_831f_41b2_a1d0_fcde62501499.slice/crio-8a4f006ea940b6c9946951b423dc0eeb8f7091d846793e4bf6e1adb9fa164680 WatchSource:0}: Error finding container 8a4f006ea940b6c9946951b423dc0eeb8f7091d846793e4bf6e1adb9fa164680: Status 404 returned error can't find the container with id 8a4f006ea940b6c9946951b423dc0eeb8f7091d846793e4bf6e1adb9fa164680 Feb 02 17:27:29 crc kubenswrapper[4858]: I0202 17:27:29.060530 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6cl4c" event={"ID":"b123c5f8-831f-41b2-a1d0-fcde62501499","Type":"ContainerStarted","Data":"8a4f006ea940b6c9946951b423dc0eeb8f7091d846793e4bf6e1adb9fa164680"} Feb 02 17:27:31 crc kubenswrapper[4858]: I0202 17:27:31.079524 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6cl4c" event={"ID":"b123c5f8-831f-41b2-a1d0-fcde62501499","Type":"ContainerStarted","Data":"9df65bae2dbeeae3d054636485708260ecc5b6c978b9312922297a2a69ababd6"} Feb 02 17:27:31 crc kubenswrapper[4858]: I0202 17:27:31.104660 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6cl4c" podStartSLOduration=1.611212918 podStartE2EDuration="4.104641527s" podCreationTimestamp="2026-02-02 17:27:27 +0000 UTC" firstStartedPulling="2026-02-02 17:27:28.174670765 +0000 UTC m=+749.327086060" lastFinishedPulling="2026-02-02 17:27:30.668099404 +0000 UTC m=+751.820514669" observedRunningTime="2026-02-02 17:27:31.101537016 +0000 UTC m=+752.253952291" watchObservedRunningTime="2026-02-02 17:27:31.104641527 +0000 UTC m=+752.257056802" Feb 02 17:27:33 crc kubenswrapper[4858]: I0202 17:27:33.102264 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-svlq2" Feb 02 17:27:33 crc kubenswrapper[4858]: I0202 17:27:33.601498 4858 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 17:27:33 crc kubenswrapper[4858]: I0202 17:27:33.723697 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-pgxzh" Feb 02 17:27:37 crc kubenswrapper[4858]: I0202 17:27:37.684360 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-6cl4c" Feb 02 17:27:37 crc kubenswrapper[4858]: I0202 17:27:37.687059 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-6cl4c" Feb 02 17:27:37 crc kubenswrapper[4858]: I0202 17:27:37.714219 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack-operators/openstack-operator-index-6cl4c" Feb 02 17:27:38 crc kubenswrapper[4858]: I0202 17:27:38.166950 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-6cl4c" Feb 02 17:27:43 crc kubenswrapper[4858]: I0202 17:27:43.357686 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5"] Feb 02 17:27:43 crc kubenswrapper[4858]: I0202 17:27:43.360253 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5" Feb 02 17:27:43 crc kubenswrapper[4858]: I0202 17:27:43.362644 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hwdrw" Feb 02 17:27:43 crc kubenswrapper[4858]: I0202 17:27:43.365027 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5"] Feb 02 17:27:43 crc kubenswrapper[4858]: I0202 17:27:43.449292 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4156898-70e8-4bdc-a254-49c0917d38dc-util\") pod \"15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5\" (UID: \"b4156898-70e8-4bdc-a254-49c0917d38dc\") " pod="openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5" Feb 02 17:27:43 crc kubenswrapper[4858]: I0202 17:27:43.449356 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4156898-70e8-4bdc-a254-49c0917d38dc-bundle\") pod \"15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5\" (UID: \"b4156898-70e8-4bdc-a254-49c0917d38dc\") " pod="openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5" Feb 02 17:27:43 crc kubenswrapper[4858]: I0202 17:27:43.449405 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q2sz\" (UniqueName: \"kubernetes.io/projected/b4156898-70e8-4bdc-a254-49c0917d38dc-kube-api-access-6q2sz\") pod \"15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5\" (UID: \"b4156898-70e8-4bdc-a254-49c0917d38dc\") " pod="openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5" Feb 02 17:27:43 crc kubenswrapper[4858]: I0202 17:27:43.551450 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4156898-70e8-4bdc-a254-49c0917d38dc-bundle\") pod \"15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5\" (UID: \"b4156898-70e8-4bdc-a254-49c0917d38dc\") " pod="openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5" Feb 02 17:27:43 crc kubenswrapper[4858]: I0202 17:27:43.551548 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q2sz\" (UniqueName: \"kubernetes.io/projected/b4156898-70e8-4bdc-a254-49c0917d38dc-kube-api-access-6q2sz\") pod \"15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5\" (UID: \"b4156898-70e8-4bdc-a254-49c0917d38dc\") " pod="openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5" Feb 02 17:27:43 crc kubenswrapper[4858]: I0202 17:27:43.551613 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4156898-70e8-4bdc-a254-49c0917d38dc-util\") pod \"15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5\" (UID: \"b4156898-70e8-4bdc-a254-49c0917d38dc\") " pod="openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5" Feb 02 17:27:43 crc kubenswrapper[4858]: I0202 17:27:43.551925 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4156898-70e8-4bdc-a254-49c0917d38dc-bundle\") pod \"15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5\" (UID: \"b4156898-70e8-4bdc-a254-49c0917d38dc\") " pod="openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5" Feb 02 17:27:43 crc kubenswrapper[4858]: I0202 17:27:43.552046 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4156898-70e8-4bdc-a254-49c0917d38dc-util\") pod \"15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5\" (UID: \"b4156898-70e8-4bdc-a254-49c0917d38dc\") " pod="openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5" Feb 02 17:27:43 crc kubenswrapper[4858]: I0202 17:27:43.585383 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q2sz\" (UniqueName: \"kubernetes.io/projected/b4156898-70e8-4bdc-a254-49c0917d38dc-kube-api-access-6q2sz\") pod \"15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5\" (UID: \"b4156898-70e8-4bdc-a254-49c0917d38dc\") " pod="openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5" Feb 02 17:27:43 crc kubenswrapper[4858]: I0202 17:27:43.722877 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5" Feb 02 17:27:43 crc kubenswrapper[4858]: I0202 17:27:43.955617 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5"] Feb 02 17:27:43 crc kubenswrapper[4858]: W0202 17:27:43.964795 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4156898_70e8_4bdc_a254_49c0917d38dc.slice/crio-4fdf293256753ead6e88db6d425412db3bc45bb069195dd3cd3dca44c702a855 WatchSource:0}: Error finding container 4fdf293256753ead6e88db6d425412db3bc45bb069195dd3cd3dca44c702a855: Status 404 returned error can't find the container with id 4fdf293256753ead6e88db6d425412db3bc45bb069195dd3cd3dca44c702a855 Feb 02 17:27:44 crc kubenswrapper[4858]: I0202 17:27:44.180747 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5" event={"ID":"b4156898-70e8-4bdc-a254-49c0917d38dc","Type":"ContainerStarted","Data":"029c924a29eba3dbbbf0bbd58136f97c8ed98eafcf9cc03ed4711ea9752f5742"} Feb 02 17:27:44 crc kubenswrapper[4858]: I0202 17:27:44.181145 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5" event={"ID":"b4156898-70e8-4bdc-a254-49c0917d38dc","Type":"ContainerStarted","Data":"4fdf293256753ead6e88db6d425412db3bc45bb069195dd3cd3dca44c702a855"} Feb 02 17:27:45 crc kubenswrapper[4858]: I0202 17:27:45.192064 4858 generic.go:334] "Generic (PLEG): container finished" podID="b4156898-70e8-4bdc-a254-49c0917d38dc" containerID="029c924a29eba3dbbbf0bbd58136f97c8ed98eafcf9cc03ed4711ea9752f5742" exitCode=0 Feb 02 17:27:45 crc kubenswrapper[4858]: I0202 17:27:45.192149 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5" event={"ID":"b4156898-70e8-4bdc-a254-49c0917d38dc","Type":"ContainerDied","Data":"029c924a29eba3dbbbf0bbd58136f97c8ed98eafcf9cc03ed4711ea9752f5742"} Feb 02 17:27:46 crc kubenswrapper[4858]: I0202 17:27:46.204564 4858 generic.go:334] "Generic (PLEG): container finished" podID="b4156898-70e8-4bdc-a254-49c0917d38dc" containerID="7a0144ceaa0debf5174d19359b8f16be30581b96ce0de864ee5ac17c432a9762" exitCode=0 Feb 02 17:27:46 crc kubenswrapper[4858]: I0202 17:27:46.204639 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5" event={"ID":"b4156898-70e8-4bdc-a254-49c0917d38dc","Type":"ContainerDied","Data":"7a0144ceaa0debf5174d19359b8f16be30581b96ce0de864ee5ac17c432a9762"} Feb 02 17:27:47 crc kubenswrapper[4858]: I0202 17:27:47.218306 4858 generic.go:334] "Generic (PLEG): container finished" podID="b4156898-70e8-4bdc-a254-49c0917d38dc" containerID="544c10ab0492c375fd5e69b43ce4b4413dc95ff641a56be473c20725f9e22be9" exitCode=0 Feb 02 17:27:47 crc kubenswrapper[4858]: I0202 17:27:47.218548 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5" event={"ID":"b4156898-70e8-4bdc-a254-49c0917d38dc","Type":"ContainerDied","Data":"544c10ab0492c375fd5e69b43ce4b4413dc95ff641a56be473c20725f9e22be9"} Feb 02 17:27:48 crc kubenswrapper[4858]: I0202 17:27:48.491002 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5" Feb 02 17:27:48 crc kubenswrapper[4858]: I0202 17:27:48.688718 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4156898-70e8-4bdc-a254-49c0917d38dc-bundle\") pod \"b4156898-70e8-4bdc-a254-49c0917d38dc\" (UID: \"b4156898-70e8-4bdc-a254-49c0917d38dc\") " Feb 02 17:27:48 crc kubenswrapper[4858]: I0202 17:27:48.688836 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q2sz\" (UniqueName: \"kubernetes.io/projected/b4156898-70e8-4bdc-a254-49c0917d38dc-kube-api-access-6q2sz\") pod \"b4156898-70e8-4bdc-a254-49c0917d38dc\" (UID: \"b4156898-70e8-4bdc-a254-49c0917d38dc\") " Feb 02 17:27:48 crc kubenswrapper[4858]: I0202 17:27:48.689041 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4156898-70e8-4bdc-a254-49c0917d38dc-util\") pod \"b4156898-70e8-4bdc-a254-49c0917d38dc\" (UID: \"b4156898-70e8-4bdc-a254-49c0917d38dc\") " Feb 02 17:27:48 crc kubenswrapper[4858]: I0202 17:27:48.690855 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4156898-70e8-4bdc-a254-49c0917d38dc-bundle" (OuterVolumeSpecName: "bundle") pod "b4156898-70e8-4bdc-a254-49c0917d38dc" (UID: "b4156898-70e8-4bdc-a254-49c0917d38dc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:27:48 crc kubenswrapper[4858]: I0202 17:27:48.697131 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4156898-70e8-4bdc-a254-49c0917d38dc-kube-api-access-6q2sz" (OuterVolumeSpecName: "kube-api-access-6q2sz") pod "b4156898-70e8-4bdc-a254-49c0917d38dc" (UID: "b4156898-70e8-4bdc-a254-49c0917d38dc"). InnerVolumeSpecName "kube-api-access-6q2sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:27:48 crc kubenswrapper[4858]: I0202 17:27:48.711431 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4156898-70e8-4bdc-a254-49c0917d38dc-util" (OuterVolumeSpecName: "util") pod "b4156898-70e8-4bdc-a254-49c0917d38dc" (UID: "b4156898-70e8-4bdc-a254-49c0917d38dc"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:27:48 crc kubenswrapper[4858]: I0202 17:27:48.790916 4858 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4156898-70e8-4bdc-a254-49c0917d38dc-util\") on node \"crc\" DevicePath \"\"" Feb 02 17:27:48 crc kubenswrapper[4858]: I0202 17:27:48.790953 4858 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4156898-70e8-4bdc-a254-49c0917d38dc-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:27:48 crc kubenswrapper[4858]: I0202 17:27:48.790963 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q2sz\" (UniqueName: \"kubernetes.io/projected/b4156898-70e8-4bdc-a254-49c0917d38dc-kube-api-access-6q2sz\") on node \"crc\" DevicePath \"\"" Feb 02 17:27:49 crc kubenswrapper[4858]: I0202 17:27:49.238261 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5" event={"ID":"b4156898-70e8-4bdc-a254-49c0917d38dc","Type":"ContainerDied","Data":"4fdf293256753ead6e88db6d425412db3bc45bb069195dd3cd3dca44c702a855"} Feb 02 17:27:49 crc kubenswrapper[4858]: I0202 17:27:49.238814 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fdf293256753ead6e88db6d425412db3bc45bb069195dd3cd3dca44c702a855" Feb 02 17:27:49 crc kubenswrapper[4858]: I0202 17:27:49.238332 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5" Feb 02 17:27:55 crc kubenswrapper[4858]: I0202 17:27:55.605057 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7b85844457-9n8fp"] Feb 02 17:27:55 crc kubenswrapper[4858]: E0202 17:27:55.605815 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4156898-70e8-4bdc-a254-49c0917d38dc" containerName="pull" Feb 02 17:27:55 crc kubenswrapper[4858]: I0202 17:27:55.605828 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4156898-70e8-4bdc-a254-49c0917d38dc" containerName="pull" Feb 02 17:27:55 crc kubenswrapper[4858]: E0202 17:27:55.605839 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4156898-70e8-4bdc-a254-49c0917d38dc" containerName="util" Feb 02 17:27:55 crc kubenswrapper[4858]: I0202 17:27:55.605844 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4156898-70e8-4bdc-a254-49c0917d38dc" containerName="util" Feb 02 17:27:55 crc kubenswrapper[4858]: E0202 17:27:55.605854 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4156898-70e8-4bdc-a254-49c0917d38dc" containerName="extract" Feb 02 17:27:55 crc kubenswrapper[4858]: I0202 17:27:55.605860 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4156898-70e8-4bdc-a254-49c0917d38dc" containerName="extract" Feb 02 17:27:55 crc kubenswrapper[4858]: I0202 17:27:55.605987 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4156898-70e8-4bdc-a254-49c0917d38dc" containerName="extract" Feb 02 17:27:55 crc kubenswrapper[4858]: I0202 17:27:55.606371 4858 util.go:30] "No sandbox for pod can be found. 
Feb 02 17:27:55 crc kubenswrapper[4858]: I0202 17:27:55.609030 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-jxsql"
Feb 02 17:27:55 crc kubenswrapper[4858]: I0202 17:27:55.634605 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7b85844457-9n8fp"]
Feb 02 17:27:55 crc kubenswrapper[4858]: I0202 17:27:55.696930 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vtj7\" (UniqueName: \"kubernetes.io/projected/332ff13e-699a-4582-873c-073c20cb6ca0-kube-api-access-5vtj7\") pod \"openstack-operator-controller-init-7b85844457-9n8fp\" (UID: \"332ff13e-699a-4582-873c-073c20cb6ca0\") " pod="openstack-operators/openstack-operator-controller-init-7b85844457-9n8fp"
Feb 02 17:27:55 crc kubenswrapper[4858]: I0202 17:27:55.798598 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vtj7\" (UniqueName: \"kubernetes.io/projected/332ff13e-699a-4582-873c-073c20cb6ca0-kube-api-access-5vtj7\") pod \"openstack-operator-controller-init-7b85844457-9n8fp\" (UID: \"332ff13e-699a-4582-873c-073c20cb6ca0\") " pod="openstack-operators/openstack-operator-controller-init-7b85844457-9n8fp"
Feb 02 17:27:55 crc kubenswrapper[4858]: I0202 17:27:55.829763 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vtj7\" (UniqueName: \"kubernetes.io/projected/332ff13e-699a-4582-873c-073c20cb6ca0-kube-api-access-5vtj7\") pod \"openstack-operator-controller-init-7b85844457-9n8fp\" (UID: \"332ff13e-699a-4582-873c-073c20cb6ca0\") " pod="openstack-operators/openstack-operator-controller-init-7b85844457-9n8fp"
Feb 02 17:27:55 crc kubenswrapper[4858]: I0202 17:27:55.931393 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7b85844457-9n8fp"
Feb 02 17:27:56 crc kubenswrapper[4858]: I0202 17:27:56.433127 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7b85844457-9n8fp"]
Feb 02 17:27:57 crc kubenswrapper[4858]: I0202 17:27:57.319955 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7b85844457-9n8fp" event={"ID":"332ff13e-699a-4582-873c-073c20cb6ca0","Type":"ContainerStarted","Data":"6f37ac3ce06d9f4d6af6388bad4db470199c244f7f7d9583353fbfc74cf74a9d"}
Feb 02 17:27:57 crc kubenswrapper[4858]: I0202 17:27:57.807675 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 17:27:57 crc kubenswrapper[4858]: I0202 17:27:57.807737 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 17:28:00 crc kubenswrapper[4858]: I0202 17:28:00.350337 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7b85844457-9n8fp" event={"ID":"332ff13e-699a-4582-873c-073c20cb6ca0","Type":"ContainerStarted","Data":"2b57445518ef1302ed9da3e6dd5fe61a2c5af2c4f37d39d0937fc5d32b10f311"}
Feb 02 17:28:00 crc kubenswrapper[4858]: I0202 17:28:00.350867 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7b85844457-9n8fp"
Feb 02 17:28:00 crc kubenswrapper[4858]: I0202 17:28:00.379219 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7b85844457-9n8fp" podStartSLOduration=1.883201656 podStartE2EDuration="5.379203297s" podCreationTimestamp="2026-02-02 17:27:55 +0000 UTC" firstStartedPulling="2026-02-02 17:27:56.441031919 +0000 UTC m=+777.593447194" lastFinishedPulling="2026-02-02 17:27:59.93703357 +0000 UTC m=+781.089448835" observedRunningTime="2026-02-02 17:28:00.375686624 +0000 UTC m=+781.528101889" watchObservedRunningTime="2026-02-02 17:28:00.379203297 +0000 UTC m=+781.531618562"
Feb 02 17:28:05 crc kubenswrapper[4858]: I0202 17:28:05.935936 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7b85844457-9n8fp"
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.805347 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-r786j"]
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.806874 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-r786j"
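Note: the "Observed pod startup duration" entry at 17:28:00 above is self-consistent: podStartSLOduration is the end-to-end startup time minus the image-pull window, which you can recompute from the monotonic (m=) clock values in the same entry. A small sketch of the arithmetic (values copied from the log line; this is a check, not kubelet code):

```go
// Recomputing podStartSLOduration from the fields logged above.
// The pull window is lastFinishedPulling - firstStartedPulling,
// taken from the monotonic m= values.
package main

import "fmt"

func main() {
	podStartE2E := 5.379203297           // podStartE2EDuration, seconds
	firstStartedPulling := 777.593447194 // m= value, seconds
	lastFinishedPulling := 781.089448835 // m= value, seconds

	pullWindow := lastFinishedPulling - firstStartedPulling // ~3.496 s
	fmt.Printf("podStartSLOduration ~= %.9f s\n", podStartE2E-pullWindow)
	// Prints ~1.883201656, matching the logged value up to float rounding.
}
```

So of the 5.379 s end-to-end startup, roughly 3.496 s was spent pulling images, leaving the 1.883 s that counts against the startup SLO.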
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.810094 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-58gqd"
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.812930 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-rgpmv"]
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.813870 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-rgpmv"
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.820168 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-kcbss"]
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.820539 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-xdwf6"
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.821397 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kcbss"
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.823849 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-rpb4s"
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.832007 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-r786j"]
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.836232 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-rgpmv"]
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.859553 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-p9qwv"]
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.860502 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-p9qwv"
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.863416 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-sv49v"
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.875009 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-p9qwv"]
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.880799 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-99rfw"]
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.881681 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-99rfw"
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.883163 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-lvkpm"
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.890202 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-kcgxf"]
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.891024 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-kcgxf"
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.894725 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-k9l6n"
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.900863 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-99rfw"]
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.920485 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-kcbss"]
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.933632 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghgcq\" (UniqueName: \"kubernetes.io/projected/f700cc0f-80eb-46a5-b7d3-b32dccdc2f49-kube-api-access-ghgcq\") pod \"designate-operator-controller-manager-6d9697b7f4-kcbss\" (UID: \"f700cc0f-80eb-46a5-b7d3-b32dccdc2f49\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kcbss"
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.933697 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5zjq\" (UniqueName: \"kubernetes.io/projected/a7c0be68-b4e3-47dc-b6c0-acd8878465ee-kube-api-access-x5zjq\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-r786j\" (UID: \"a7c0be68-b4e3-47dc-b6c0-acd8878465ee\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-r786j"
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.933769 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz598\" (UniqueName: \"kubernetes.io/projected/e61e293a-bb2a-4ccd-bc20-815cc2bfb01b-kube-api-access-jz598\") pod \"cinder-operator-controller-manager-8d874c8fc-rgpmv\" (UID: \"e61e293a-bb2a-4ccd-bc20-815cc2bfb01b\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-rgpmv"
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.934862 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-kcgxf"]
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.962605 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-ck77w"]
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.963623 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ck77w"
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.967685 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-crpsj"
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.967734 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.971136 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-kffpf"]
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.971947 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-kffpf"
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.974013 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-gl5cz"
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.981781 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-ck77w"]
Feb 02 17:28:23 crc kubenswrapper[4858]: I0202 17:28:23.991176 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-kffpf"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.004900 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-rmkrp"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.005842 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rmkrp"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.014257 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-74vn7"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.026075 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-7cq6h"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.027039 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7cq6h"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.030275 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-6cchh"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.033703 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-2p2q6"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.034660 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2p2q6"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.036293 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-8qpb8"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.036843 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbtxn\" (UniqueName: \"kubernetes.io/projected/8eca62a8-4909-4402-89ff-bd59ad42daef-kube-api-access-fbtxn\") pod \"horizon-operator-controller-manager-5fb775575f-kcgxf\" (UID: \"8eca62a8-4909-4402-89ff-bd59ad42daef\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-kcgxf"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.036883 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxgkr\" (UniqueName: \"kubernetes.io/projected/096752c5-391b-4370-b5f6-39ef63d6878e-kube-api-access-wxgkr\") pod \"manila-operator-controller-manager-7dd968899f-7cq6h\" (UID: \"096752c5-391b-4370-b5f6-39ef63d6878e\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7cq6h"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.036919 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz598\" (UniqueName: \"kubernetes.io/projected/e61e293a-bb2a-4ccd-bc20-815cc2bfb01b-kube-api-access-jz598\") pod \"cinder-operator-controller-manager-8d874c8fc-rgpmv\" (UID: \"e61e293a-bb2a-4ccd-bc20-815cc2bfb01b\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-rgpmv"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.036960 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcjkp\" (UniqueName: \"kubernetes.io/projected/2600f62e-5615-4217-9629-9b77846634f9-kube-api-access-zcjkp\") pod \"mariadb-operator-controller-manager-67bf948998-2p2q6\" (UID: \"2600f62e-5615-4217-9629-9b77846634f9\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2p2q6"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.037005 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsbdq\" (UniqueName: \"kubernetes.io/projected/8c70d2b3-c4e9-422f-ace6-f11450c068ec-kube-api-access-vsbdq\") pod \"keystone-operator-controller-manager-84f48565d4-rmkrp\" (UID: \"8c70d2b3-c4e9-422f-ace6-f11450c068ec\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rmkrp"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.037034 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b2eeae9-b158-4d59-8056-b12e1a397d18-cert\") pod \"infra-operator-controller-manager-79955696d6-ck77w\" (UID: \"5b2eeae9-b158-4d59-8056-b12e1a397d18\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ck77w"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.037058 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5ml7\" (UniqueName: \"kubernetes.io/projected/ad1072ec-d0e8-49ff-9971-8f6589bde802-kube-api-access-d5ml7\") pod \"glance-operator-controller-manager-8886f4c47-p9qwv\" (UID: \"ad1072ec-d0e8-49ff-9971-8f6589bde802\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-p9qwv"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.037093 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znqxd\" (UniqueName: \"kubernetes.io/projected/5b2eeae9-b158-4d59-8056-b12e1a397d18-kube-api-access-znqxd\") pod \"infra-operator-controller-manager-79955696d6-ck77w\" (UID: \"5b2eeae9-b158-4d59-8056-b12e1a397d18\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ck77w"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.037123 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghgcq\" (UniqueName: \"kubernetes.io/projected/f700cc0f-80eb-46a5-b7d3-b32dccdc2f49-kube-api-access-ghgcq\") pod \"designate-operator-controller-manager-6d9697b7f4-kcbss\" (UID: \"f700cc0f-80eb-46a5-b7d3-b32dccdc2f49\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kcbss"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.037145 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dph52\" (UniqueName: \"kubernetes.io/projected/44678b87-d59f-4661-93c9-8e2ddb8ea61e-kube-api-access-dph52\") pod \"ironic-operator-controller-manager-5f4b8bd54d-kffpf\" (UID: \"44678b87-d59f-4661-93c9-8e2ddb8ea61e\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-kffpf"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.037183 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5zjq\" (UniqueName: \"kubernetes.io/projected/a7c0be68-b4e3-47dc-b6c0-acd8878465ee-kube-api-access-x5zjq\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-r786j\" (UID: \"a7c0be68-b4e3-47dc-b6c0-acd8878465ee\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-r786j"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.037219 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74tlm\" (UniqueName: \"kubernetes.io/projected/76ec111a-d121-411c-9d81-8fcfd6323d49-kube-api-access-74tlm\") pod \"heat-operator-controller-manager-69d6db494d-99rfw\" (UID: \"76ec111a-d121-411c-9d81-8fcfd6323d49\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-99rfw"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.049601 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-7cq6h"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.061476 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-rmkrp"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.086993 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz598\" (UniqueName: \"kubernetes.io/projected/e61e293a-bb2a-4ccd-bc20-815cc2bfb01b-kube-api-access-jz598\") pod \"cinder-operator-controller-manager-8d874c8fc-rgpmv\" (UID: \"e61e293a-bb2a-4ccd-bc20-815cc2bfb01b\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-rgpmv"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.097253 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-2p2q6"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.111833 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5zjq\" (UniqueName: \"kubernetes.io/projected/a7c0be68-b4e3-47dc-b6c0-acd8878465ee-kube-api-access-x5zjq\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-r786j\" (UID: \"a7c0be68-b4e3-47dc-b6c0-acd8878465ee\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-r786j"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.113472 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghgcq\" (UniqueName: \"kubernetes.io/projected/f700cc0f-80eb-46a5-b7d3-b32dccdc2f49-kube-api-access-ghgcq\") pod \"designate-operator-controller-manager-6d9697b7f4-kcbss\" (UID: \"f700cc0f-80eb-46a5-b7d3-b32dccdc2f49\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kcbss"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.142046 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-vpbp7"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.144439 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-vpbp7"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.144928 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcjkp\" (UniqueName: \"kubernetes.io/projected/2600f62e-5615-4217-9629-9b77846634f9-kube-api-access-zcjkp\") pod \"mariadb-operator-controller-manager-67bf948998-2p2q6\" (UID: \"2600f62e-5615-4217-9629-9b77846634f9\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2p2q6"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.145039 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsbdq\" (UniqueName: \"kubernetes.io/projected/8c70d2b3-c4e9-422f-ace6-f11450c068ec-kube-api-access-vsbdq\") pod \"keystone-operator-controller-manager-84f48565d4-rmkrp\" (UID: \"8c70d2b3-c4e9-422f-ace6-f11450c068ec\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rmkrp"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.145127 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b2eeae9-b158-4d59-8056-b12e1a397d18-cert\") pod \"infra-operator-controller-manager-79955696d6-ck77w\" (UID: \"5b2eeae9-b158-4d59-8056-b12e1a397d18\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ck77w"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.145152 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5ml7\" (UniqueName: \"kubernetes.io/projected/ad1072ec-d0e8-49ff-9971-8f6589bde802-kube-api-access-d5ml7\") pod \"glance-operator-controller-manager-8886f4c47-p9qwv\" (UID: \"ad1072ec-d0e8-49ff-9971-8f6589bde802\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-p9qwv"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.145218 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znqxd\" (UniqueName: \"kubernetes.io/projected/5b2eeae9-b158-4d59-8056-b12e1a397d18-kube-api-access-znqxd\") pod \"infra-operator-controller-manager-79955696d6-ck77w\" (UID: \"5b2eeae9-b158-4d59-8056-b12e1a397d18\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ck77w"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.145283 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dph52\" (UniqueName: \"kubernetes.io/projected/44678b87-d59f-4661-93c9-8e2ddb8ea61e-kube-api-access-dph52\") pod \"ironic-operator-controller-manager-5f4b8bd54d-kffpf\" (UID: \"44678b87-d59f-4661-93c9-8e2ddb8ea61e\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-kffpf"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.145340 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74tlm\" (UniqueName: \"kubernetes.io/projected/76ec111a-d121-411c-9d81-8fcfd6323d49-kube-api-access-74tlm\") pod \"heat-operator-controller-manager-69d6db494d-99rfw\" (UID: \"76ec111a-d121-411c-9d81-8fcfd6323d49\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-99rfw"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.145426 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbtxn\" (UniqueName: \"kubernetes.io/projected/8eca62a8-4909-4402-89ff-bd59ad42daef-kube-api-access-fbtxn\") pod \"horizon-operator-controller-manager-5fb775575f-kcgxf\" (UID: \"8eca62a8-4909-4402-89ff-bd59ad42daef\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-kcgxf"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.145513 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxgkr\" (UniqueName: \"kubernetes.io/projected/096752c5-391b-4370-b5f6-39ef63d6878e-kube-api-access-wxgkr\") pod \"manila-operator-controller-manager-7dd968899f-7cq6h\" (UID: \"096752c5-391b-4370-b5f6-39ef63d6878e\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7cq6h"
Feb 02 17:28:24 crc kubenswrapper[4858]: E0202 17:28:24.154207 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 02 17:28:24 crc kubenswrapper[4858]: E0202 17:28:24.154509 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b2eeae9-b158-4d59-8056-b12e1a397d18-cert podName:5b2eeae9-b158-4d59-8056-b12e1a397d18 nodeName:}" failed. No retries permitted until 2026-02-02 17:28:24.654380953 +0000 UTC m=+805.806796228 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5b2eeae9-b158-4d59-8056-b12e1a397d18-cert") pod "infra-operator-controller-manager-79955696d6-ck77w" (UID: "5b2eeae9-b158-4d59-8056-b12e1a397d18") : secret "infra-operator-webhook-server-cert" not found
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.170755 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-r786j"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.180118 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-vpbp7"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.183158 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-rgpmv"
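Note: the two E-level entries at 17:28:24.154 above are the first sign of trouble for the infra-operator pod: its "cert" volume points at the secret infra-operator-webhook-server-cert, which does not exist yet, so MountVolume.SetUp fails and the pod cannot start until the secret appears. A minimal out-of-band existence check with client-go (illustrative only, not part of the kubelet; assumes a reachable kubeconfig at ~/.kube/config):

```go
// Sketch: verify whether the secret the kubelet complained about exists.
// Assumes k8s.io/client-go is available and a kubeconfig at the default path.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	_, err = client.CoreV1().Secrets("openstack-operators").Get(
		context.TODO(), "infra-operator-webhook-server-cert", metav1.GetOptions{})
	if err != nil {
		// Matches the kubelet's complaint until whatever controller owns
		// the webhook certificate creates the secret.
		fmt.Println("secret not found:", err)
		return
	}
	fmt.Println("secret exists; the mount should succeed on the next retry")
}
```

Since secret volumes are resolved at mount time, no pod restart is needed: once the secret exists, the kubelet's next retry of the same operation succeeds.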
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.185435 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-p2v9j"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.202066 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kcbss"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.223483 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxgkr\" (UniqueName: \"kubernetes.io/projected/096752c5-391b-4370-b5f6-39ef63d6878e-kube-api-access-wxgkr\") pod \"manila-operator-controller-manager-7dd968899f-7cq6h\" (UID: \"096752c5-391b-4370-b5f6-39ef63d6878e\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7cq6h"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.226991 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-2g5g6"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.227758 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-2g5g6"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.234959 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbtxn\" (UniqueName: \"kubernetes.io/projected/8eca62a8-4909-4402-89ff-bd59ad42daef-kube-api-access-fbtxn\") pod \"horizon-operator-controller-manager-5fb775575f-kcgxf\" (UID: \"8eca62a8-4909-4402-89ff-bd59ad42daef\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-kcgxf"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.240414 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2ds2z"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.243745 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcjkp\" (UniqueName: \"kubernetes.io/projected/2600f62e-5615-4217-9629-9b77846634f9-kube-api-access-zcjkp\") pod \"mariadb-operator-controller-manager-67bf948998-2p2q6\" (UID: \"2600f62e-5615-4217-9629-9b77846634f9\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2p2q6"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.244624 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5ml7\" (UniqueName: \"kubernetes.io/projected/ad1072ec-d0e8-49ff-9971-8f6589bde802-kube-api-access-d5ml7\") pod \"glance-operator-controller-manager-8886f4c47-p9qwv\" (UID: \"ad1072ec-d0e8-49ff-9971-8f6589bde802\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-p9qwv"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.245052 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74tlm\" (UniqueName: \"kubernetes.io/projected/76ec111a-d121-411c-9d81-8fcfd6323d49-kube-api-access-74tlm\") pod \"heat-operator-controller-manager-69d6db494d-99rfw\" (UID: \"76ec111a-d121-411c-9d81-8fcfd6323d49\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-99rfw"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.245412 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsbdq\" (UniqueName: \"kubernetes.io/projected/8c70d2b3-c4e9-422f-ace6-f11450c068ec-kube-api-access-vsbdq\") pod \"keystone-operator-controller-manager-84f48565d4-rmkrp\" (UID: \"8c70d2b3-c4e9-422f-ace6-f11450c068ec\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rmkrp"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.246777 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dph52\" (UniqueName: \"kubernetes.io/projected/44678b87-d59f-4661-93c9-8e2ddb8ea61e-kube-api-access-dph52\") pod \"ironic-operator-controller-manager-5f4b8bd54d-kffpf\" (UID: \"44678b87-d59f-4661-93c9-8e2ddb8ea61e\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-kffpf"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.247487 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z47bw\" (UniqueName: \"kubernetes.io/projected/f5578b04-55cc-4bb9-a3f5-27e63ffe0c27-kube-api-access-z47bw\") pod \"neutron-operator-controller-manager-585dbc889-vpbp7\" (UID: \"f5578b04-55cc-4bb9-a3f5-27e63ffe0c27\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-vpbp7"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.254698 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znqxd\" (UniqueName: \"kubernetes.io/projected/5b2eeae9-b158-4d59-8056-b12e1a397d18-kube-api-access-znqxd\") pod \"infra-operator-controller-manager-79955696d6-ck77w\" (UID: \"5b2eeae9-b158-4d59-8056-b12e1a397d18\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ck77w"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.255041 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-kcgxf"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.317553 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-kffpf"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.343170 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rmkrp"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.358070 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-tfvcz"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.359035 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-2g5g6"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.359126 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tfvcz"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.360719 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z47bw\" (UniqueName: \"kubernetes.io/projected/f5578b04-55cc-4bb9-a3f5-27e63ffe0c27-kube-api-access-z47bw\") pod \"neutron-operator-controller-manager-585dbc889-vpbp7\" (UID: \"f5578b04-55cc-4bb9-a3f5-27e63ffe0c27\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-vpbp7"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.360794 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvhwp\" (UniqueName: \"kubernetes.io/projected/b6d0d2c9-a689-4bcf-b3c8-b8aa25e47898-kube-api-access-nvhwp\") pod \"nova-operator-controller-manager-55bff696bd-2g5g6\" (UID: \"b6d0d2c9-a689-4bcf-b3c8-b8aa25e47898\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-2g5g6"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.366605 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-p2kjr"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.372320 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7cq6h"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.387957 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2p2q6"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.389938 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z47bw\" (UniqueName: \"kubernetes.io/projected/f5578b04-55cc-4bb9-a3f5-27e63ffe0c27-kube-api-access-z47bw\") pod \"neutron-operator-controller-manager-585dbc889-vpbp7\" (UID: \"f5578b04-55cc-4bb9-a3f5-27e63ffe0c27\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-vpbp7"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.435567 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-tfvcz"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.438918 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-8r9sc"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.440054 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-8r9sc"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.442873 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-hvvg2"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.462060 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvhwp\" (UniqueName: \"kubernetes.io/projected/b6d0d2c9-a689-4bcf-b3c8-b8aa25e47898-kube-api-access-nvhwp\") pod \"nova-operator-controller-manager-55bff696bd-2g5g6\" (UID: \"b6d0d2c9-a689-4bcf-b3c8-b8aa25e47898\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-2g5g6"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.462160 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kw2s\" (UniqueName: \"kubernetes.io/projected/405115c4-bd24-4b05-b437-a8a27bc1f2b5-kube-api-access-5kw2s\") pod \"octavia-operator-controller-manager-6687f8d877-tfvcz\" (UID: \"405115c4-bd24-4b05-b437-a8a27bc1f2b5\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tfvcz"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.500794 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-8r9sc"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.510391 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvhwp\" (UniqueName: \"kubernetes.io/projected/b6d0d2c9-a689-4bcf-b3c8-b8aa25e47898-kube-api-access-nvhwp\") pod \"nova-operator-controller-manager-55bff696bd-2g5g6\" (UID: \"b6d0d2c9-a689-4bcf-b3c8-b8aa25e47898\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-2g5g6"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.517097 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-zlfqp"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.517922 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-zlfqp"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.521193 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-qq8qt"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.521994 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-p9qwv"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.531089 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-zlfqp"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.533235 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-99rfw"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.560984 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.562140 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.563266 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht2ns\" (UniqueName: \"kubernetes.io/projected/80f2567c-89d7-4350-a7f2-acd472bc2f68-kube-api-access-ht2ns\") pod \"ovn-operator-controller-manager-788c46999f-8r9sc\" (UID: \"80f2567c-89d7-4350-a7f2-acd472bc2f68\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-8r9sc"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.563315 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kw2s\" (UniqueName: \"kubernetes.io/projected/405115c4-bd24-4b05-b437-a8a27bc1f2b5-kube-api-access-5kw2s\") pod \"octavia-operator-controller-manager-6687f8d877-tfvcz\" (UID: \"405115c4-bd24-4b05-b437-a8a27bc1f2b5\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tfvcz"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.573415 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.573725 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-5ngdf"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.598634 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-vpbp7"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.604033 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-srbzf"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.605035 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-srbzf"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.608985 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kw2s\" (UniqueName: \"kubernetes.io/projected/405115c4-bd24-4b05-b437-a8a27bc1f2b5-kube-api-access-5kw2s\") pod \"octavia-operator-controller-manager-6687f8d877-tfvcz\" (UID: \"405115c4-bd24-4b05-b437-a8a27bc1f2b5\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tfvcz"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.616363 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-xwdl4"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.617072 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-srbzf"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.624592 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.643104 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-4jd6l"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.643442 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-2g5g6"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.645007 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-4jd6l"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.647395 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-zb2sn"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.654364 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-4jd6l"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.664450 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00e707da-7230-4214-82a0-e1b18aad70a8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf\" (UID: \"00e707da-7230-4214-82a0-e1b18aad70a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.664525 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds54q\" (UniqueName: \"kubernetes.io/projected/00e707da-7230-4214-82a0-e1b18aad70a8-kube-api-access-ds54q\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf\" (UID: \"00e707da-7230-4214-82a0-e1b18aad70a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf"
Feb 02 17:28:24 crc kubenswrapper[4858]: E0202 17:28:24.665606 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 02 17:28:24 crc kubenswrapper[4858]: E0202 17:28:24.665662 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b2eeae9-b158-4d59-8056-b12e1a397d18-cert podName:5b2eeae9-b158-4d59-8056-b12e1a397d18 nodeName:}" failed. No retries permitted until 2026-02-02 17:28:25.665645466 +0000 UTC m=+806.818060731 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5b2eeae9-b158-4d59-8056-b12e1a397d18-cert") pod "infra-operator-controller-manager-79955696d6-ck77w" (UID: "5b2eeae9-b158-4d59-8056-b12e1a397d18") : secret "infra-operator-webhook-server-cert" not found
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.666341 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b2eeae9-b158-4d59-8056-b12e1a397d18-cert\") pod \"infra-operator-controller-manager-79955696d6-ck77w\" (UID: \"5b2eeae9-b158-4d59-8056-b12e1a397d18\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ck77w"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.666487 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht2ns\" (UniqueName: \"kubernetes.io/projected/80f2567c-89d7-4350-a7f2-acd472bc2f68-kube-api-access-ht2ns\") pod \"ovn-operator-controller-manager-788c46999f-8r9sc\" (UID: \"80f2567c-89d7-4350-a7f2-acd472bc2f68\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-8r9sc"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.666520 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh4rg\" (UniqueName: \"kubernetes.io/projected/4a72e4a0-8e70-4d04-85c8-15b68840632d-kube-api-access-fh4rg\") pod \"swift-operator-controller-manager-68fc8c869-zlfqp\" (UID: \"4a72e4a0-8e70-4d04-85c8-15b68840632d\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-zlfqp"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.689090 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-z7cp9"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.690033 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-z7cp9"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.690115 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-z7cp9"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.690409 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht2ns\" (UniqueName: \"kubernetes.io/projected/80f2567c-89d7-4350-a7f2-acd472bc2f68-kube-api-access-ht2ns\") pod \"ovn-operator-controller-manager-788c46999f-8r9sc\" (UID: \"80f2567c-89d7-4350-a7f2-acd472bc2f68\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-8r9sc"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.692608 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-bmkqw"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.697732 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-rl5z2"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.698547 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-rl5z2"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.704945 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tfvcz"
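Note: compare the two failures for the same infra-operator cert volume: the first, at 17:28:24.154, set durationBeforeRetry 500ms; the second, at 17:28:24.665, set 1s. The nestedpendingoperations layer applies per-operation exponential backoff, doubling the hold-off after each consecutive failure. A minimal sketch of that doubling pattern (illustrative only; the cap value below is an assumption, not taken from this log or the kubelet source):

```go
// Sketch of per-operation exponential backoff as observed above:
// each consecutive failure doubles the wait before a retry is permitted.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // first durationBeforeRetry in the log
	maxDelay := 2 * time.Minute     // assumed cap for illustration
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// Prints 500ms, 1s, 2s, 4s, ... matching the 500ms -> 1s step observed.
}
```

The backoff keeps a missing secret from hammering the API server while still converging quickly once the secret is created.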
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.705066 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-rl5z2"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.705779 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-sq9vq"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.720814 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.727562 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.730683 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.731654 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.734727 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.734956 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-knvjl"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.769432 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52f6w\" (UniqueName: \"kubernetes.io/projected/16a9ca97-2b15-4a52-8d2c-eb170a3f2b75-kube-api-access-52f6w\") pod \"placement-operator-controller-manager-5b964cf4cd-srbzf\" (UID: \"16a9ca97-2b15-4a52-8d2c-eb170a3f2b75\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-srbzf"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.769520 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00e707da-7230-4214-82a0-e1b18aad70a8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf\" (UID: \"00e707da-7230-4214-82a0-e1b18aad70a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.769581 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngb6m\" (UniqueName: \"kubernetes.io/projected/366ee9f4-9c6e-416a-8603-f6bac0530a6a-kube-api-access-ngb6m\") pod \"telemetry-operator-controller-manager-64b5b76f97-4jd6l\" (UID: \"366ee9f4-9c6e-416a-8603-f6bac0530a6a\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-4jd6l"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.769613 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds54q\" (UniqueName: \"kubernetes.io/projected/00e707da-7230-4214-82a0-e1b18aad70a8-kube-api-access-ds54q\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf\" (UID: \"00e707da-7230-4214-82a0-e1b18aad70a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.769669 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh4rg\" (UniqueName: \"kubernetes.io/projected/4a72e4a0-8e70-4d04-85c8-15b68840632d-kube-api-access-fh4rg\") pod \"swift-operator-controller-manager-68fc8c869-zlfqp\" (UID: \"4a72e4a0-8e70-4d04-85c8-15b68840632d\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-zlfqp"
Feb 02 17:28:24 crc kubenswrapper[4858]: E0202 17:28:24.770380 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 17:28:24 crc kubenswrapper[4858]: E0202 17:28:24.770422 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00e707da-7230-4214-82a0-e1b18aad70a8-cert podName:00e707da-7230-4214-82a0-e1b18aad70a8 nodeName:}" failed. No retries permitted until 2026-02-02 17:28:25.270409231 +0000 UTC m=+806.422824496 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/00e707da-7230-4214-82a0-e1b18aad70a8-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf" (UID: "00e707da-7230-4214-82a0-e1b18aad70a8") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.777598 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8lqf9"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.779602 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8lqf9"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.781743 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-8r9sc"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.784025 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-kp62f"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.792076 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh4rg\" (UniqueName: \"kubernetes.io/projected/4a72e4a0-8e70-4d04-85c8-15b68840632d-kube-api-access-fh4rg\") pod \"swift-operator-controller-manager-68fc8c869-zlfqp\" (UID: \"4a72e4a0-8e70-4d04-85c8-15b68840632d\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-zlfqp"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.798850 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds54q\" (UniqueName: \"kubernetes.io/projected/00e707da-7230-4214-82a0-e1b18aad70a8-kube-api-access-ds54q\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf\" (UID: \"00e707da-7230-4214-82a0-e1b18aad70a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.825000 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8lqf9"]
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.872019 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjxdj\" (UniqueName: \"kubernetes.io/projected/ad13cd52-7254-489a-8960-511bbc2a3360-kube-api-access-xjxdj\") pod \"openstack-operator-controller-manager-86df59f79f-rczsp\" (UID: \"ad13cd52-7254-489a-8960-511bbc2a3360\") " pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.873032 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5f64\" (UniqueName: \"kubernetes.io/projected/da72a0b3-6998-4d0e-b7d3-f4fce5f11f1b-kube-api-access-f5f64\") pod \"test-operator-controller-manager-56f8bfcd9f-z7cp9\" (UID: \"da72a0b3-6998-4d0e-b7d3-f4fce5f11f1b\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-z7cp9"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.873101 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-metrics-certs\") pod \"openstack-operator-controller-manager-86df59f79f-rczsp\" (UID: \"ad13cd52-7254-489a-8960-511bbc2a3360\") " pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.873132 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-webhook-certs\") pod \"openstack-operator-controller-manager-86df59f79f-rczsp\" (UID: \"ad13cd52-7254-489a-8960-511bbc2a3360\") " pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.873175 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52f6w\" (UniqueName: \"kubernetes.io/projected/16a9ca97-2b15-4a52-8d2c-eb170a3f2b75-kube-api-access-52f6w\") pod \"placement-operator-controller-manager-5b964cf4cd-srbzf\" (UID: \"16a9ca97-2b15-4a52-8d2c-eb170a3f2b75\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-srbzf"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.873238 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs6jr\" (UniqueName: \"kubernetes.io/projected/467af09f-e1d2-407e-989e-606a3a3219b0-kube-api-access-fs6jr\") pod \"watcher-operator-controller-manager-564965969-rl5z2\" (UID: \"467af09f-e1d2-407e-989e-606a3a3219b0\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-rl5z2"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.873261 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngb6m\" (UniqueName: \"kubernetes.io/projected/366ee9f4-9c6e-416a-8603-f6bac0530a6a-kube-api-access-ngb6m\") pod \"telemetry-operator-controller-manager-64b5b76f97-4jd6l\" (UID: \"366ee9f4-9c6e-416a-8603-f6bac0530a6a\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-4jd6l"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.877385 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-zlfqp"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.904764 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngb6m\" (UniqueName: \"kubernetes.io/projected/366ee9f4-9c6e-416a-8603-f6bac0530a6a-kube-api-access-ngb6m\") pod \"telemetry-operator-controller-manager-64b5b76f97-4jd6l\" (UID: \"366ee9f4-9c6e-416a-8603-f6bac0530a6a\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-4jd6l"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.905344 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52f6w\" (UniqueName: \"kubernetes.io/projected/16a9ca97-2b15-4a52-8d2c-eb170a3f2b75-kube-api-access-52f6w\") pod \"placement-operator-controller-manager-5b964cf4cd-srbzf\" (UID: \"16a9ca97-2b15-4a52-8d2c-eb170a3f2b75\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-srbzf"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.951133 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-srbzf"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.974725 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs6jr\" (UniqueName: \"kubernetes.io/projected/467af09f-e1d2-407e-989e-606a3a3219b0-kube-api-access-fs6jr\") pod \"watcher-operator-controller-manager-564965969-rl5z2\" (UID: \"467af09f-e1d2-407e-989e-606a3a3219b0\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-rl5z2"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.974793 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjxdj\" (UniqueName: \"kubernetes.io/projected/ad13cd52-7254-489a-8960-511bbc2a3360-kube-api-access-xjxdj\") pod \"openstack-operator-controller-manager-86df59f79f-rczsp\" (UID: \"ad13cd52-7254-489a-8960-511bbc2a3360\") " pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.974847 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5f64\" (UniqueName: \"kubernetes.io/projected/da72a0b3-6998-4d0e-b7d3-f4fce5f11f1b-kube-api-access-f5f64\") pod \"test-operator-controller-manager-56f8bfcd9f-z7cp9\" (UID: \"da72a0b3-6998-4d0e-b7d3-f4fce5f11f1b\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-z7cp9"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.974883 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g84sd\" (UniqueName: \"kubernetes.io/projected/3733a396-b067-4153-891a-1c5b044a7e04-kube-api-access-g84sd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8lqf9\" (UID: \"3733a396-b067-4153-891a-1c5b044a7e04\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8lqf9"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.974938 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-metrics-certs\") pod \"openstack-operator-controller-manager-86df59f79f-rczsp\" (UID: \"ad13cd52-7254-489a-8960-511bbc2a3360\") " pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp"
Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.974972 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-webhook-certs\") pod \"openstack-operator-controller-manager-86df59f79f-rczsp\" (UID: \"ad13cd52-7254-489a-8960-511bbc2a3360\") " pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp"
Feb 02 17:28:24 crc kubenswrapper[4858]: E0202 17:28:24.975157 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 02 17:28:24 crc kubenswrapper[4858]: E0202 17:28:24.975216 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-webhook-certs podName:ad13cd52-7254-489a-8960-511bbc2a3360 nodeName:}" failed. No retries permitted until 2026-02-02 17:28:25.475196095 +0000 UTC m=+806.627611360 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-webhook-certs") pod "openstack-operator-controller-manager-86df59f79f-rczsp" (UID: "ad13cd52-7254-489a-8960-511bbc2a3360") : secret "webhook-server-cert" not found Feb 02 17:28:24 crc kubenswrapper[4858]: E0202 17:28:24.975459 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 17:28:24 crc kubenswrapper[4858]: E0202 17:28:24.975533 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-metrics-certs podName:ad13cd52-7254-489a-8960-511bbc2a3360 nodeName:}" failed. No retries permitted until 2026-02-02 17:28:25.475518184 +0000 UTC m=+806.627933449 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-metrics-certs") pod "openstack-operator-controller-manager-86df59f79f-rczsp" (UID: "ad13cd52-7254-489a-8960-511bbc2a3360") : secret "metrics-server-cert" not found Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.980242 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-4jd6l" Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.981190 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-kcgxf"] Feb 02 17:28:24 crc kubenswrapper[4858]: I0202 17:28:24.998383 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjxdj\" (UniqueName: \"kubernetes.io/projected/ad13cd52-7254-489a-8960-511bbc2a3360-kube-api-access-xjxdj\") pod \"openstack-operator-controller-manager-86df59f79f-rczsp\" (UID: \"ad13cd52-7254-489a-8960-511bbc2a3360\") " pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp" Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.000237 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-r786j"] Feb 02 17:28:25 crc kubenswrapper[4858]: W0202 17:28:25.000859 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eca62a8_4909_4402_89ff_bd59ad42daef.slice/crio-464c006bfbbd382390783fa40c4727bb409ca1a9b980dd845384e076bba60d16 WatchSource:0}: Error finding container 464c006bfbbd382390783fa40c4727bb409ca1a9b980dd845384e076bba60d16: Status 404 returned error can't find the container with id 464c006bfbbd382390783fa40c4727bb409ca1a9b980dd845384e076bba60d16 Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.005868 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5f64\" (UniqueName: \"kubernetes.io/projected/da72a0b3-6998-4d0e-b7d3-f4fce5f11f1b-kube-api-access-f5f64\") pod \"test-operator-controller-manager-56f8bfcd9f-z7cp9\" (UID: \"da72a0b3-6998-4d0e-b7d3-f4fce5f11f1b\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-z7cp9" Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.005898 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs6jr\" (UniqueName: \"kubernetes.io/projected/467af09f-e1d2-407e-989e-606a3a3219b0-kube-api-access-fs6jr\") pod \"watcher-operator-controller-manager-564965969-rl5z2\" (UID: 
\"467af09f-e1d2-407e-989e-606a3a3219b0\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-rl5z2" Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.011816 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-kcbss"] Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.025755 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-z7cp9" Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.032038 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-rl5z2" Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.075917 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g84sd\" (UniqueName: \"kubernetes.io/projected/3733a396-b067-4153-891a-1c5b044a7e04-kube-api-access-g84sd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8lqf9\" (UID: \"3733a396-b067-4153-891a-1c5b044a7e04\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8lqf9" Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.094763 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g84sd\" (UniqueName: \"kubernetes.io/projected/3733a396-b067-4153-891a-1c5b044a7e04-kube-api-access-g84sd\") pod \"rabbitmq-cluster-operator-manager-668c99d594-8lqf9\" (UID: \"3733a396-b067-4153-891a-1c5b044a7e04\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8lqf9" Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.146389 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8lqf9" Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.195071 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-rgpmv"] Feb 02 17:28:25 crc kubenswrapper[4858]: W0202 17:28:25.218819 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode61e293a_bb2a_4ccd_bc20_815cc2bfb01b.slice/crio-825e21c3cfba56f39a51b4fd312d522b5873a57fb840a8c9c312fc3de6bd02fa WatchSource:0}: Error finding container 825e21c3cfba56f39a51b4fd312d522b5873a57fb840a8c9c312fc3de6bd02fa: Status 404 returned error can't find the container with id 825e21c3cfba56f39a51b4fd312d522b5873a57fb840a8c9c312fc3de6bd02fa Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.287596 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00e707da-7230-4214-82a0-e1b18aad70a8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf\" (UID: \"00e707da-7230-4214-82a0-e1b18aad70a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf" Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.287903 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.287949 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00e707da-7230-4214-82a0-e1b18aad70a8-cert podName:00e707da-7230-4214-82a0-e1b18aad70a8 nodeName:}" failed. 
No retries permitted until 2026-02-02 17:28:26.287934716 +0000 UTC m=+807.440349981 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/00e707da-7230-4214-82a0-e1b18aad70a8-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf" (UID: "00e707da-7230-4214-82a0-e1b18aad70a8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.352748 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-p9qwv"] Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.361685 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-2g5g6"] Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.371601 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-7cq6h"] Feb 02 17:28:25 crc kubenswrapper[4858]: W0202 17:28:25.380490 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod096752c5_391b_4370_b5f6_39ef63d6878e.slice/crio-08849e5f6c93b63f32e2eddca6c137478254f66acd5ea95dabb8673f15bdc901 WatchSource:0}: Error finding container 08849e5f6c93b63f32e2eddca6c137478254f66acd5ea95dabb8673f15bdc901: Status 404 returned error can't find the container with id 08849e5f6c93b63f32e2eddca6c137478254f66acd5ea95dabb8673f15bdc901 Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.490138 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-metrics-certs\") pod \"openstack-operator-controller-manager-86df59f79f-rczsp\" (UID: \"ad13cd52-7254-489a-8960-511bbc2a3360\") " pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp" Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.490193 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-webhook-certs\") pod \"openstack-operator-controller-manager-86df59f79f-rczsp\" (UID: \"ad13cd52-7254-489a-8960-511bbc2a3360\") " pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp" Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.490345 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.490378 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.490418 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-metrics-certs podName:ad13cd52-7254-489a-8960-511bbc2a3360 nodeName:}" failed. No retries permitted until 2026-02-02 17:28:26.490396371 +0000 UTC m=+807.642811716 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-metrics-certs") pod "openstack-operator-controller-manager-86df59f79f-rczsp" (UID: "ad13cd52-7254-489a-8960-511bbc2a3360") : secret "metrics-server-cert" not found Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.490438 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-webhook-certs podName:ad13cd52-7254-489a-8960-511bbc2a3360 nodeName:}" failed. No retries permitted until 2026-02-02 17:28:26.490431252 +0000 UTC m=+807.642846517 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-webhook-certs") pod "openstack-operator-controller-manager-86df59f79f-rczsp" (UID: "ad13cd52-7254-489a-8960-511bbc2a3360") : secret "webhook-server-cert" not found Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.547203 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-vpbp7"] Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.558688 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-rmkrp"] Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.574761 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-kffpf"] Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.574829 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-rgpmv" event={"ID":"e61e293a-bb2a-4ccd-bc20-815cc2bfb01b","Type":"ContainerStarted","Data":"825e21c3cfba56f39a51b4fd312d522b5873a57fb840a8c9c312fc3de6bd02fa"} Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.580532 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-p9qwv" event={"ID":"ad1072ec-d0e8-49ff-9971-8f6589bde802","Type":"ContainerStarted","Data":"33b2a263017ceb341cfb50362799dc1e48e979e0311f4a61de35e31ce8105612"} Feb 02 17:28:25 crc kubenswrapper[4858]: W0202 17:28:25.580643 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5578b04_55cc_4bb9_a3f5_27e63ffe0c27.slice/crio-70c068643794a42dc419d72c07ae276f44793eb33b572f064c8e2c6898eaf599 WatchSource:0}: Error finding container 70c068643794a42dc419d72c07ae276f44793eb33b572f064c8e2c6898eaf599: Status 404 returned error can't find the container with id 70c068643794a42dc419d72c07ae276f44793eb33b572f064c8e2c6898eaf599 Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.584775 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-2p2q6"] Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.592060 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-99rfw"] Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.592374 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kcbss" event={"ID":"f700cc0f-80eb-46a5-b7d3-b32dccdc2f49","Type":"ContainerStarted","Data":"c44ffd6c0cd50bd62587ab4d07bafa98b898c1819e38b654113b935178948c9b"} Feb 02 17:28:25 crc 
kubenswrapper[4858]: I0202 17:28:25.596095 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-zlfqp"] Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.596349 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-kcgxf" event={"ID":"8eca62a8-4909-4402-89ff-bd59ad42daef","Type":"ContainerStarted","Data":"464c006bfbbd382390783fa40c4727bb409ca1a9b980dd845384e076bba60d16"} Feb 02 17:28:25 crc kubenswrapper[4858]: W0202 17:28:25.597234 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76ec111a_d121_411c_9d81_8fcfd6323d49.slice/crio-19780077c34de61ee8d455fe0efdc0a57e3cf15be443fa2748b971927a10625c WatchSource:0}: Error finding container 19780077c34de61ee8d455fe0efdc0a57e3cf15be443fa2748b971927a10625c: Status 404 returned error can't find the container with id 19780077c34de61ee8d455fe0efdc0a57e3cf15be443fa2748b971927a10625c Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.600649 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-tfvcz"] Feb 02 17:28:25 crc kubenswrapper[4858]: W0202 17:28:25.604821 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod405115c4_bd24_4b05_b437_a8a27bc1f2b5.slice/crio-f662deb43b2fb5d2e97fac913990d07f62362156c623280831bd38a4cc55dd48 WatchSource:0}: Error finding container f662deb43b2fb5d2e97fac913990d07f62362156c623280831bd38a4cc55dd48: Status 404 returned error can't find the container with id f662deb43b2fb5d2e97fac913990d07f62362156c623280831bd38a4cc55dd48 Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.606835 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7cq6h" event={"ID":"096752c5-391b-4370-b5f6-39ef63d6878e","Type":"ContainerStarted","Data":"08849e5f6c93b63f32e2eddca6c137478254f66acd5ea95dabb8673f15bdc901"} Feb 02 17:28:25 crc kubenswrapper[4858]: W0202 17:28:25.607016 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod366ee9f4_9c6e_416a_8603_f6bac0530a6a.slice/crio-f558ef44d569fe7bb721b2784070c4f9b1c160e2c42ebe72a8780b4b607ac59a WatchSource:0}: Error finding container f558ef44d569fe7bb721b2784070c4f9b1c160e2c42ebe72a8780b4b607ac59a: Status 404 returned error can't find the container with id f558ef44d569fe7bb721b2784070c4f9b1c160e2c42ebe72a8780b4b607ac59a Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.608407 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 
10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-74tlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69d6db494d-99rfw_openstack-operators(76ec111a-d121-411c-9d81-8fcfd6323d49): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.608833 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-4jd6l"] Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.610154 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-99rfw" podUID="76ec111a-d121-411c-9d81-8fcfd6323d49" Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.610339 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ngb6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-4jd6l_openstack-operators(366ee9f4-9c6e-416a-8603-f6bac0530a6a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.611419 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-r786j" event={"ID":"a7c0be68-b4e3-47dc-b6c0-acd8878465ee","Type":"ContainerStarted","Data":"61b1cdd1e78bec995fc0adf8e415d166cb4c346100bc7c0cd72afba65ada0407"} Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.611541 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-4jd6l" podUID="366ee9f4-9c6e-416a-8603-f6bac0530a6a" Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.612549 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-srbzf"] Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.623566 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-2g5g6" event={"ID":"b6d0d2c9-a689-4bcf-b3c8-b8aa25e47898","Type":"ContainerStarted","Data":"ebb0b9d9c92539ebd406aa576295aacf960a6b96d7167c921a5a9d96e405827b"} Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.624581 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5kw2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-tfvcz_openstack-operators(405115c4-bd24-4b05-b437-a8a27bc1f2b5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.625921 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tfvcz" podUID="405115c4-bd24-4b05-b437-a8a27bc1f2b5" Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.626165 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-8r9sc"] Feb 02 17:28:25 crc kubenswrapper[4858]: W0202 17:28:25.630515 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80f2567c_89d7_4350_a7f2_acd472bc2f68.slice/crio-21929aa0dbd6708089e8de9ee5cdce5da8cd71b0a6c6f41db08016e86dfd1797 WatchSource:0}: Error finding container 21929aa0dbd6708089e8de9ee5cdce5da8cd71b0a6c6f41db08016e86dfd1797: Status 404 returned error can't find the container with id 21929aa0dbd6708089e8de9ee5cdce5da8cd71b0a6c6f41db08016e86dfd1797 Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.633931 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ht2ns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-8r9sc_openstack-operators(80f2567c-89d7-4350-a7f2-acd472bc2f68): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.635335 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-8r9sc" podUID="80f2567c-89d7-4350-a7f2-acd472bc2f68" Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.694810 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b2eeae9-b158-4d59-8056-b12e1a397d18-cert\") pod \"infra-operator-controller-manager-79955696d6-ck77w\" (UID: \"5b2eeae9-b158-4d59-8056-b12e1a397d18\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ck77w" Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.695086 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.695131 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b2eeae9-b158-4d59-8056-b12e1a397d18-cert podName:5b2eeae9-b158-4d59-8056-b12e1a397d18 nodeName:}" failed. No retries permitted until 2026-02-02 17:28:27.695117793 +0000 UTC m=+808.847533058 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5b2eeae9-b158-4d59-8056-b12e1a397d18-cert") pod "infra-operator-controller-manager-79955696d6-ck77w" (UID: "5b2eeae9-b158-4d59-8056-b12e1a397d18") : secret "infra-operator-webhook-server-cert" not found Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.699456 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-z7cp9"] Feb 02 17:28:25 crc kubenswrapper[4858]: W0202 17:28:25.705115 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda72a0b3_6998_4d0e_b7d3_f4fce5f11f1b.slice/crio-fc2f93eee0f5abf2ed96ec07e130494fdff61f0ae7b207003dbd5d4ebc3ff607 WatchSource:0}: Error finding container fc2f93eee0f5abf2ed96ec07e130494fdff61f0ae7b207003dbd5d4ebc3ff607: Status 404 returned error can't find the container with id fc2f93eee0f5abf2ed96ec07e130494fdff61f0ae7b207003dbd5d4ebc3ff607 Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.717190 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f5f64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-z7cp9_openstack-operators(da72a0b3-6998-4d0e-b7d3-f4fce5f11f1b): ErrImagePull: pull QPS exceeded" 
logger="UnhandledError" Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.718806 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-z7cp9" podUID="da72a0b3-6998-4d0e-b7d3-f4fce5f11f1b" Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.722907 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-rl5z2"] Feb 02 17:28:25 crc kubenswrapper[4858]: I0202 17:28:25.727446 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8lqf9"] Feb 02 17:28:25 crc kubenswrapper[4858]: W0202 17:28:25.734766 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3733a396_b067_4153_891a_1c5b044a7e04.slice/crio-eb3340620172ae2dfa4ccfff64c257d39c489a3feb6772b833d410cf820265d5 WatchSource:0}: Error finding container eb3340620172ae2dfa4ccfff64c257d39c489a3feb6772b833d410cf820265d5: Status 404 returned error can't find the container with id eb3340620172ae2dfa4ccfff64c257d39c489a3feb6772b833d410cf820265d5 Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.734898 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fs6jr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-rl5z2_openstack-operators(467af09f-e1d2-407e-989e-606a3a3219b0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.735994 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-rl5z2" podUID="467af09f-e1d2-407e-989e-606a3a3219b0" Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.737918 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g84sd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-8lqf9_openstack-operators(3733a396-b067-4153-891a-1c5b044a7e04): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 17:28:25 crc kubenswrapper[4858]: E0202 17:28:25.739379 4858 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8lqf9" podUID="3733a396-b067-4153-891a-1c5b044a7e04" Feb 02 17:28:26 crc kubenswrapper[4858]: I0202 17:28:26.303173 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00e707da-7230-4214-82a0-e1b18aad70a8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf\" (UID: \"00e707da-7230-4214-82a0-e1b18aad70a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf" Feb 02 17:28:26 crc kubenswrapper[4858]: E0202 17:28:26.303390 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 17:28:26 crc kubenswrapper[4858]: E0202 17:28:26.303496 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00e707da-7230-4214-82a0-e1b18aad70a8-cert podName:00e707da-7230-4214-82a0-e1b18aad70a8 nodeName:}" failed. No retries permitted until 2026-02-02 17:28:28.303480148 +0000 UTC m=+809.455895413 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/00e707da-7230-4214-82a0-e1b18aad70a8-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf" (UID: "00e707da-7230-4214-82a0-e1b18aad70a8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 17:28:26 crc kubenswrapper[4858]: I0202 17:28:26.506584 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-metrics-certs\") pod \"openstack-operator-controller-manager-86df59f79f-rczsp\" (UID: \"ad13cd52-7254-489a-8960-511bbc2a3360\") " pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp" Feb 02 17:28:26 crc kubenswrapper[4858]: I0202 17:28:26.506635 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-webhook-certs\") pod \"openstack-operator-controller-manager-86df59f79f-rczsp\" (UID: \"ad13cd52-7254-489a-8960-511bbc2a3360\") " pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp" Feb 02 17:28:26 crc kubenswrapper[4858]: E0202 17:28:26.507343 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 17:28:26 crc kubenswrapper[4858]: E0202 17:28:26.507391 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-metrics-certs podName:ad13cd52-7254-489a-8960-511bbc2a3360 nodeName:}" failed. No retries permitted until 2026-02-02 17:28:28.507377745 +0000 UTC m=+809.659793010 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-metrics-certs") pod "openstack-operator-controller-manager-86df59f79f-rczsp" (UID: "ad13cd52-7254-489a-8960-511bbc2a3360") : secret "metrics-server-cert" not found Feb 02 17:28:26 crc kubenswrapper[4858]: E0202 17:28:26.507740 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 17:28:26 crc kubenswrapper[4858]: E0202 17:28:26.507797 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-webhook-certs podName:ad13cd52-7254-489a-8960-511bbc2a3360 nodeName:}" failed. No retries permitted until 2026-02-02 17:28:28.507781427 +0000 UTC m=+809.660196692 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-webhook-certs") pod "openstack-operator-controller-manager-86df59f79f-rczsp" (UID: "ad13cd52-7254-489a-8960-511bbc2a3360") : secret "webhook-server-cert" not found Feb 02 17:28:26 crc kubenswrapper[4858]: I0202 17:28:26.646424 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-8r9sc" event={"ID":"80f2567c-89d7-4350-a7f2-acd472bc2f68","Type":"ContainerStarted","Data":"21929aa0dbd6708089e8de9ee5cdce5da8cd71b0a6c6f41db08016e86dfd1797"} Feb 02 17:28:26 crc kubenswrapper[4858]: I0202 17:28:26.653198 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tfvcz" event={"ID":"405115c4-bd24-4b05-b437-a8a27bc1f2b5","Type":"ContainerStarted","Data":"f662deb43b2fb5d2e97fac913990d07f62362156c623280831bd38a4cc55dd48"} Feb 02 17:28:26 crc kubenswrapper[4858]: E0202 17:28:26.655953 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-8r9sc" podUID="80f2567c-89d7-4350-a7f2-acd472bc2f68" Feb 02 17:28:26 crc kubenswrapper[4858]: E0202 17:28:26.656265 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tfvcz" podUID="405115c4-bd24-4b05-b437-a8a27bc1f2b5" Feb 02 17:28:26 crc kubenswrapper[4858]: I0202 17:28:26.657026 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-4jd6l" event={"ID":"366ee9f4-9c6e-416a-8603-f6bac0530a6a","Type":"ContainerStarted","Data":"f558ef44d569fe7bb721b2784070c4f9b1c160e2c42ebe72a8780b4b607ac59a"} Feb 02 17:28:26 crc kubenswrapper[4858]: E0202 17:28:26.663423 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" 
pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-4jd6l" podUID="366ee9f4-9c6e-416a-8603-f6bac0530a6a" Feb 02 17:28:26 crc kubenswrapper[4858]: I0202 17:28:26.665007 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2p2q6" event={"ID":"2600f62e-5615-4217-9629-9b77846634f9","Type":"ContainerStarted","Data":"0497891a0724261489b79fa97cb15d1cc46fac70a28e7a8ecdb3b0b6ee9d2ffc"} Feb 02 17:28:26 crc kubenswrapper[4858]: I0202 17:28:26.668049 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-kffpf" event={"ID":"44678b87-d59f-4661-93c9-8e2ddb8ea61e","Type":"ContainerStarted","Data":"243c00c6ca853b51b231d6e27fe45a07f82d24b7354af4e7bcc5c6c82f862195"} Feb 02 17:28:26 crc kubenswrapper[4858]: I0202 17:28:26.671508 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-99rfw" event={"ID":"76ec111a-d121-411c-9d81-8fcfd6323d49","Type":"ContainerStarted","Data":"19780077c34de61ee8d455fe0efdc0a57e3cf15be443fa2748b971927a10625c"} Feb 02 17:28:26 crc kubenswrapper[4858]: E0202 17:28:26.674311 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-99rfw" podUID="76ec111a-d121-411c-9d81-8fcfd6323d49" Feb 02 17:28:26 crc kubenswrapper[4858]: I0202 17:28:26.674449 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-zlfqp" event={"ID":"4a72e4a0-8e70-4d04-85c8-15b68840632d","Type":"ContainerStarted","Data":"cfc8b80ae17fecf977839ff67e123ae141b8dc8e1bbcb01df27ca5ebdbd4dd41"} Feb 02 17:28:26 crc kubenswrapper[4858]: I0202 17:28:26.682889 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rmkrp" event={"ID":"8c70d2b3-c4e9-422f-ace6-f11450c068ec","Type":"ContainerStarted","Data":"d7c2afcb06b2f81188b36f34e10830ec9bc3d4e8d8d382bdd882d7801f70c4c4"} Feb 02 17:28:26 crc kubenswrapper[4858]: I0202 17:28:26.690569 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-rl5z2" event={"ID":"467af09f-e1d2-407e-989e-606a3a3219b0","Type":"ContainerStarted","Data":"8a9ee97457cfd8659d07cac25c169b49105f0e9eee3d11482529b01c3f27457f"} Feb 02 17:28:26 crc kubenswrapper[4858]: E0202 17:28:26.695343 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-rl5z2" podUID="467af09f-e1d2-407e-989e-606a3a3219b0" Feb 02 17:28:26 crc kubenswrapper[4858]: I0202 17:28:26.696677 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-z7cp9" event={"ID":"da72a0b3-6998-4d0e-b7d3-f4fce5f11f1b","Type":"ContainerStarted","Data":"fc2f93eee0f5abf2ed96ec07e130494fdff61f0ae7b207003dbd5d4ebc3ff607"} Feb 02 17:28:26 crc kubenswrapper[4858]: I0202 
17:28:26.703804 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-vpbp7" event={"ID":"f5578b04-55cc-4bb9-a3f5-27e63ffe0c27","Type":"ContainerStarted","Data":"70c068643794a42dc419d72c07ae276f44793eb33b572f064c8e2c6898eaf599"} Feb 02 17:28:26 crc kubenswrapper[4858]: E0202 17:28:26.704478 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-z7cp9" podUID="da72a0b3-6998-4d0e-b7d3-f4fce5f11f1b" Feb 02 17:28:26 crc kubenswrapper[4858]: I0202 17:28:26.710763 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8lqf9" event={"ID":"3733a396-b067-4153-891a-1c5b044a7e04","Type":"ContainerStarted","Data":"eb3340620172ae2dfa4ccfff64c257d39c489a3feb6772b833d410cf820265d5"} Feb 02 17:28:26 crc kubenswrapper[4858]: I0202 17:28:26.712710 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-srbzf" event={"ID":"16a9ca97-2b15-4a52-8d2c-eb170a3f2b75","Type":"ContainerStarted","Data":"df9cfe74e81b13dcd03793e986de6959ceeec370481a1436ff5098acd5d05a31"} Feb 02 17:28:26 crc kubenswrapper[4858]: E0202 17:28:26.717901 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8lqf9" podUID="3733a396-b067-4153-891a-1c5b044a7e04" Feb 02 17:28:27 crc kubenswrapper[4858]: E0202 17:28:27.727723 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-rl5z2" podUID="467af09f-e1d2-407e-989e-606a3a3219b0" Feb 02 17:28:27 crc kubenswrapper[4858]: E0202 17:28:27.728145 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-z7cp9" podUID="da72a0b3-6998-4d0e-b7d3-f4fce5f11f1b" Feb 02 17:28:27 crc kubenswrapper[4858]: E0202 17:28:27.728188 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-4jd6l" podUID="366ee9f4-9c6e-416a-8603-f6bac0530a6a" Feb 02 17:28:27 crc kubenswrapper[4858]: E0202 17:28:27.728227 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-99rfw" podUID="76ec111a-d121-411c-9d81-8fcfd6323d49" Feb 02 17:28:27 crc kubenswrapper[4858]: E0202 17:28:27.728248 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-8r9sc" podUID="80f2567c-89d7-4350-a7f2-acd472bc2f68" Feb 02 17:28:27 crc kubenswrapper[4858]: E0202 17:28:27.728731 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tfvcz" podUID="405115c4-bd24-4b05-b437-a8a27bc1f2b5" Feb 02 17:28:27 crc kubenswrapper[4858]: E0202 17:28:27.729623 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8lqf9" podUID="3733a396-b067-4153-891a-1c5b044a7e04" Feb 02 17:28:27 crc kubenswrapper[4858]: I0202 17:28:27.733698 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b2eeae9-b158-4d59-8056-b12e1a397d18-cert\") pod \"infra-operator-controller-manager-79955696d6-ck77w\" (UID: \"5b2eeae9-b158-4d59-8056-b12e1a397d18\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ck77w" Feb 02 17:28:27 crc kubenswrapper[4858]: E0202 17:28:27.734424 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 17:28:27 crc kubenswrapper[4858]: E0202 17:28:27.734489 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b2eeae9-b158-4d59-8056-b12e1a397d18-cert podName:5b2eeae9-b158-4d59-8056-b12e1a397d18 nodeName:}" failed. No retries permitted until 2026-02-02 17:28:31.734471457 +0000 UTC m=+812.886886812 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5b2eeae9-b158-4d59-8056-b12e1a397d18-cert") pod "infra-operator-controller-manager-79955696d6-ck77w" (UID: "5b2eeae9-b158-4d59-8056-b12e1a397d18") : secret "infra-operator-webhook-server-cert" not found Feb 02 17:28:27 crc kubenswrapper[4858]: I0202 17:28:27.809714 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:28:27 crc kubenswrapper[4858]: I0202 17:28:27.809767 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:28:28 crc kubenswrapper[4858]: I0202 17:28:28.340250 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00e707da-7230-4214-82a0-e1b18aad70a8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf\" (UID: \"00e707da-7230-4214-82a0-e1b18aad70a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf" Feb 02 17:28:28 crc kubenswrapper[4858]: E0202 17:28:28.340394 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 17:28:28 crc kubenswrapper[4858]: E0202 17:28:28.340445 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00e707da-7230-4214-82a0-e1b18aad70a8-cert podName:00e707da-7230-4214-82a0-e1b18aad70a8 nodeName:}" failed. No retries permitted until 2026-02-02 17:28:32.340430702 +0000 UTC m=+813.492845967 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/00e707da-7230-4214-82a0-e1b18aad70a8-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf" (UID: "00e707da-7230-4214-82a0-e1b18aad70a8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 17:28:28 crc kubenswrapper[4858]: I0202 17:28:28.542521 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-metrics-certs\") pod \"openstack-operator-controller-manager-86df59f79f-rczsp\" (UID: \"ad13cd52-7254-489a-8960-511bbc2a3360\") " pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp" Feb 02 17:28:28 crc kubenswrapper[4858]: I0202 17:28:28.542610 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-webhook-certs\") pod \"openstack-operator-controller-manager-86df59f79f-rczsp\" (UID: \"ad13cd52-7254-489a-8960-511bbc2a3360\") " pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp" Feb 02 17:28:28 crc kubenswrapper[4858]: E0202 17:28:28.542831 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 17:28:28 crc kubenswrapper[4858]: E0202 17:28:28.542897 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-webhook-certs podName:ad13cd52-7254-489a-8960-511bbc2a3360 nodeName:}" failed. No retries permitted until 2026-02-02 17:28:32.542875387 +0000 UTC m=+813.695290672 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-webhook-certs") pod "openstack-operator-controller-manager-86df59f79f-rczsp" (UID: "ad13cd52-7254-489a-8960-511bbc2a3360") : secret "webhook-server-cert" not found Feb 02 17:28:28 crc kubenswrapper[4858]: E0202 17:28:28.543609 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 17:28:28 crc kubenswrapper[4858]: E0202 17:28:28.543703 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-metrics-certs podName:ad13cd52-7254-489a-8960-511bbc2a3360 nodeName:}" failed. No retries permitted until 2026-02-02 17:28:32.54365459 +0000 UTC m=+813.696069865 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-metrics-certs") pod "openstack-operator-controller-manager-86df59f79f-rczsp" (UID: "ad13cd52-7254-489a-8960-511bbc2a3360") : secret "metrics-server-cert" not found Feb 02 17:28:31 crc kubenswrapper[4858]: I0202 17:28:31.808244 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b2eeae9-b158-4d59-8056-b12e1a397d18-cert\") pod \"infra-operator-controller-manager-79955696d6-ck77w\" (UID: \"5b2eeae9-b158-4d59-8056-b12e1a397d18\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ck77w" Feb 02 17:28:31 crc kubenswrapper[4858]: E0202 17:28:31.808426 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 17:28:31 crc kubenswrapper[4858]: E0202 17:28:31.808701 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b2eeae9-b158-4d59-8056-b12e1a397d18-cert podName:5b2eeae9-b158-4d59-8056-b12e1a397d18 nodeName:}" failed. No retries permitted until 2026-02-02 17:28:39.808681325 +0000 UTC m=+820.961096590 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5b2eeae9-b158-4d59-8056-b12e1a397d18-cert") pod "infra-operator-controller-manager-79955696d6-ck77w" (UID: "5b2eeae9-b158-4d59-8056-b12e1a397d18") : secret "infra-operator-webhook-server-cert" not found Feb 02 17:28:32 crc kubenswrapper[4858]: I0202 17:28:32.416733 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00e707da-7230-4214-82a0-e1b18aad70a8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf\" (UID: \"00e707da-7230-4214-82a0-e1b18aad70a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf" Feb 02 17:28:32 crc kubenswrapper[4858]: E0202 17:28:32.416901 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 17:28:32 crc kubenswrapper[4858]: E0202 17:28:32.416961 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00e707da-7230-4214-82a0-e1b18aad70a8-cert podName:00e707da-7230-4214-82a0-e1b18aad70a8 nodeName:}" failed. No retries permitted until 2026-02-02 17:28:40.416944785 +0000 UTC m=+821.569360050 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/00e707da-7230-4214-82a0-e1b18aad70a8-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf" (UID: "00e707da-7230-4214-82a0-e1b18aad70a8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 17:28:32 crc kubenswrapper[4858]: I0202 17:28:32.620163 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-metrics-certs\") pod \"openstack-operator-controller-manager-86df59f79f-rczsp\" (UID: \"ad13cd52-7254-489a-8960-511bbc2a3360\") " pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp" Feb 02 17:28:32 crc kubenswrapper[4858]: I0202 17:28:32.620233 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-webhook-certs\") pod \"openstack-operator-controller-manager-86df59f79f-rczsp\" (UID: \"ad13cd52-7254-489a-8960-511bbc2a3360\") " pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp" Feb 02 17:28:32 crc kubenswrapper[4858]: E0202 17:28:32.620328 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 17:28:32 crc kubenswrapper[4858]: E0202 17:28:32.620415 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-metrics-certs podName:ad13cd52-7254-489a-8960-511bbc2a3360 nodeName:}" failed. No retries permitted until 2026-02-02 17:28:40.62039513 +0000 UTC m=+821.772810395 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-metrics-certs") pod "openstack-operator-controller-manager-86df59f79f-rczsp" (UID: "ad13cd52-7254-489a-8960-511bbc2a3360") : secret "metrics-server-cert" not found Feb 02 17:28:32 crc kubenswrapper[4858]: E0202 17:28:32.620428 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 17:28:32 crc kubenswrapper[4858]: E0202 17:28:32.620706 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-webhook-certs podName:ad13cd52-7254-489a-8960-511bbc2a3360 nodeName:}" failed. No retries permitted until 2026-02-02 17:28:40.620685499 +0000 UTC m=+821.773100824 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-webhook-certs") pod "openstack-operator-controller-manager-86df59f79f-rczsp" (UID: "ad13cd52-7254-489a-8960-511bbc2a3360") : secret "webhook-server-cert" not found Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.807962 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-vpbp7" event={"ID":"f5578b04-55cc-4bb9-a3f5-27e63ffe0c27","Type":"ContainerStarted","Data":"f6436fb9dd666063faa4d9de1fd6e73e6d1446ff25fe0623ef39fbf460ac1ea9"} Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.808718 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-vpbp7" Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.809746 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-zlfqp" event={"ID":"4a72e4a0-8e70-4d04-85c8-15b68840632d","Type":"ContainerStarted","Data":"6f1798d3a872a872428d171f28d25caa623226cedb8d245b4054f4bd08bc5e94"} Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.809917 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-zlfqp" Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.811460 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7cq6h" event={"ID":"096752c5-391b-4370-b5f6-39ef63d6878e","Type":"ContainerStarted","Data":"ccf333fff9fe7c8dfac81deb87b514bf5599421108f0e0393c32c24afbe145dd"} Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.811579 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7cq6h" Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.813156 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-srbzf" event={"ID":"16a9ca97-2b15-4a52-8d2c-eb170a3f2b75","Type":"ContainerStarted","Data":"875e0e04f79ecc898cb579c3f544ce6737aa205e3aab2f3af053edeabc805d37"} Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.813211 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-srbzf" Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.814913 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-kcgxf" event={"ID":"8eca62a8-4909-4402-89ff-bd59ad42daef","Type":"ContainerStarted","Data":"de7a0d90a8c8c950f453492df3011e363126ec7c273fcdc52fb1e5f39ff060dd"} Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.814984 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-kcgxf" Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.816669 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-r786j" event={"ID":"a7c0be68-b4e3-47dc-b6c0-acd8878465ee","Type":"ContainerStarted","Data":"5d721741ba2f9584ea3ca1db78ec2c2683cdf734e017e72fa1cc29f02cc0234b"} Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.816775 4858 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-r786j" Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.818484 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2p2q6" event={"ID":"2600f62e-5615-4217-9629-9b77846634f9","Type":"ContainerStarted","Data":"1515436db0d80697e1dbbdf1e418326270f026cf2c546513cb1f716f30c20146"} Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.818540 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2p2q6" Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.819880 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-kffpf" event={"ID":"44678b87-d59f-4661-93c9-8e2ddb8ea61e","Type":"ContainerStarted","Data":"45358b1f7933707b5c01e25f10d38066541b637a7e6807d71927bf135e7bc3c6"} Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.819988 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-kffpf" Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.821464 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-p9qwv" event={"ID":"ad1072ec-d0e8-49ff-9971-8f6589bde802","Type":"ContainerStarted","Data":"8a0ee6a9701f095f0121d2c9c2eb4c6bd56c2264b3502b561864d9aa6252ffc6"} Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.821558 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-p9qwv" Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.823343 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kcbss" event={"ID":"f700cc0f-80eb-46a5-b7d3-b32dccdc2f49","Type":"ContainerStarted","Data":"93b4b187940262dc12345b7c3d9706f4219acff41f2bf554b3ce5784aea54cbd"} Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.823396 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kcbss" Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.824477 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rmkrp" event={"ID":"8c70d2b3-c4e9-422f-ace6-f11450c068ec","Type":"ContainerStarted","Data":"3b26598178e0367aba679c37e0ead6c94f3c9015cf31184c5e37967c9dbafb02"} Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.824575 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rmkrp" Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.825930 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-2g5g6" event={"ID":"b6d0d2c9-a689-4bcf-b3c8-b8aa25e47898","Type":"ContainerStarted","Data":"b235399ed7e673f454cd0dd54c35b40e7d04d496a8d57429a467001242faf4ec"} Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.826044 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-2g5g6" Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.827540 4858 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-rgpmv" event={"ID":"e61e293a-bb2a-4ccd-bc20-815cc2bfb01b","Type":"ContainerStarted","Data":"325e221f8ca9160a637d14cd768bba73f9f49cf7b41637df895377646cee456f"} Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.827693 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-rgpmv" Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.829236 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-vpbp7" podStartSLOduration=3.854665414 podStartE2EDuration="13.829222942s" podCreationTimestamp="2026-02-02 17:28:23 +0000 UTC" firstStartedPulling="2026-02-02 17:28:25.592337025 +0000 UTC m=+806.744752290" lastFinishedPulling="2026-02-02 17:28:35.566894553 +0000 UTC m=+816.719309818" observedRunningTime="2026-02-02 17:28:36.827306916 +0000 UTC m=+817.979722191" watchObservedRunningTime="2026-02-02 17:28:36.829222942 +0000 UTC m=+817.981638207" Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.877321 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-kcgxf" podStartSLOduration=3.321658718 podStartE2EDuration="13.877301925s" podCreationTimestamp="2026-02-02 17:28:23 +0000 UTC" firstStartedPulling="2026-02-02 17:28:25.011242036 +0000 UTC m=+806.163657301" lastFinishedPulling="2026-02-02 17:28:35.566885243 +0000 UTC m=+816.719300508" observedRunningTime="2026-02-02 17:28:36.872748152 +0000 UTC m=+818.025163427" watchObservedRunningTime="2026-02-02 17:28:36.877301925 +0000 UTC m=+818.029717200" Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.878568 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-zlfqp" podStartSLOduration=2.919364261 podStartE2EDuration="12.878562501s" podCreationTimestamp="2026-02-02 17:28:24 +0000 UTC" firstStartedPulling="2026-02-02 17:28:25.608069884 +0000 UTC m=+806.760485159" lastFinishedPulling="2026-02-02 17:28:35.567268134 +0000 UTC m=+816.719683399" observedRunningTime="2026-02-02 17:28:36.858417634 +0000 UTC m=+818.010832919" watchObservedRunningTime="2026-02-02 17:28:36.878562501 +0000 UTC m=+818.030977776" Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.898662 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7cq6h" podStartSLOduration=3.759612641 podStartE2EDuration="13.898642157s" podCreationTimestamp="2026-02-02 17:28:23 +0000 UTC" firstStartedPulling="2026-02-02 17:28:25.382667529 +0000 UTC m=+806.535082794" lastFinishedPulling="2026-02-02 17:28:35.521697045 +0000 UTC m=+816.674112310" observedRunningTime="2026-02-02 17:28:36.895929378 +0000 UTC m=+818.048344663" watchObservedRunningTime="2026-02-02 17:28:36.898642157 +0000 UTC m=+818.051057422" Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.945170 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rmkrp" podStartSLOduration=3.943969289 podStartE2EDuration="13.945149924s" podCreationTimestamp="2026-02-02 17:28:23 +0000 UTC" firstStartedPulling="2026-02-02 17:28:25.596335901 +0000 UTC m=+806.748751166" lastFinishedPulling="2026-02-02 
17:28:35.597516546 +0000 UTC m=+816.749931801" observedRunningTime="2026-02-02 17:28:36.944090173 +0000 UTC m=+818.096505458" watchObservedRunningTime="2026-02-02 17:28:36.945149924 +0000 UTC m=+818.097565189" Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.947249 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-p9qwv" podStartSLOduration=3.748232809 podStartE2EDuration="13.947235754s" podCreationTimestamp="2026-02-02 17:28:23 +0000 UTC" firstStartedPulling="2026-02-02 17:28:25.360613886 +0000 UTC m=+806.513029151" lastFinishedPulling="2026-02-02 17:28:35.559616841 +0000 UTC m=+816.712032096" observedRunningTime="2026-02-02 17:28:36.928871709 +0000 UTC m=+818.081286974" watchObservedRunningTime="2026-02-02 17:28:36.947235754 +0000 UTC m=+818.099651019" Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.997309 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-r786j" podStartSLOduration=3.527949454 podStartE2EDuration="13.997287034s" podCreationTimestamp="2026-02-02 17:28:23 +0000 UTC" firstStartedPulling="2026-02-02 17:28:25.090280731 +0000 UTC m=+806.242695996" lastFinishedPulling="2026-02-02 17:28:35.559618311 +0000 UTC m=+816.712033576" observedRunningTime="2026-02-02 17:28:36.992042931 +0000 UTC m=+818.144458196" watchObservedRunningTime="2026-02-02 17:28:36.997287034 +0000 UTC m=+818.149702299" Feb 02 17:28:36 crc kubenswrapper[4858]: I0202 17:28:36.997406 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kcbss" podStartSLOduration=3.496696924 podStartE2EDuration="13.997401318s" podCreationTimestamp="2026-02-02 17:28:23 +0000 UTC" firstStartedPulling="2026-02-02 17:28:25.02096069 +0000 UTC m=+806.173375965" lastFinishedPulling="2026-02-02 17:28:35.521665094 +0000 UTC m=+816.674080359" observedRunningTime="2026-02-02 17:28:36.977405554 +0000 UTC m=+818.129820819" watchObservedRunningTime="2026-02-02 17:28:36.997401318 +0000 UTC m=+818.149816583" Feb 02 17:28:37 crc kubenswrapper[4858]: I0202 17:28:37.008653 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-2g5g6" podStartSLOduration=3.7500490319999997 podStartE2EDuration="14.008630935s" podCreationTimestamp="2026-02-02 17:28:23 +0000 UTC" firstStartedPulling="2026-02-02 17:28:25.370863155 +0000 UTC m=+806.523278420" lastFinishedPulling="2026-02-02 17:28:35.629445058 +0000 UTC m=+816.781860323" observedRunningTime="2026-02-02 17:28:37.004892656 +0000 UTC m=+818.157307931" watchObservedRunningTime="2026-02-02 17:28:37.008630935 +0000 UTC m=+818.161046210" Feb 02 17:28:37 crc kubenswrapper[4858]: I0202 17:28:37.023960 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-kffpf" podStartSLOduration=4.075737142 podStartE2EDuration="14.023944822s" podCreationTimestamp="2026-02-02 17:28:23 +0000 UTC" firstStartedPulling="2026-02-02 17:28:25.573426693 +0000 UTC m=+806.725841958" lastFinishedPulling="2026-02-02 17:28:35.521634373 +0000 UTC m=+816.674049638" observedRunningTime="2026-02-02 17:28:37.022319445 +0000 UTC m=+818.174734730" watchObservedRunningTime="2026-02-02 17:28:37.023944822 +0000 UTC m=+818.176360087" Feb 02 17:28:37 crc kubenswrapper[4858]: I0202 
17:28:37.064789 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-srbzf" podStartSLOduration=3.167423707 podStartE2EDuration="13.064769283s" podCreationTimestamp="2026-02-02 17:28:24 +0000 UTC" firstStartedPulling="2026-02-02 17:28:25.624298427 +0000 UTC m=+806.776713692" lastFinishedPulling="2026-02-02 17:28:35.521644003 +0000 UTC m=+816.674059268" observedRunningTime="2026-02-02 17:28:37.046667945 +0000 UTC m=+818.199083200" watchObservedRunningTime="2026-02-02 17:28:37.064769283 +0000 UTC m=+818.217184548" Feb 02 17:28:37 crc kubenswrapper[4858]: I0202 17:28:37.067567 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2p2q6" podStartSLOduration=4.127735948 podStartE2EDuration="14.067558814s" podCreationTimestamp="2026-02-02 17:28:23 +0000 UTC" firstStartedPulling="2026-02-02 17:28:25.581866789 +0000 UTC m=+806.734282054" lastFinishedPulling="2026-02-02 17:28:35.521689655 +0000 UTC m=+816.674104920" observedRunningTime="2026-02-02 17:28:37.065277157 +0000 UTC m=+818.217692422" watchObservedRunningTime="2026-02-02 17:28:37.067558814 +0000 UTC m=+818.219974079" Feb 02 17:28:39 crc kubenswrapper[4858]: I0202 17:28:39.832261 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b2eeae9-b158-4d59-8056-b12e1a397d18-cert\") pod \"infra-operator-controller-manager-79955696d6-ck77w\" (UID: \"5b2eeae9-b158-4d59-8056-b12e1a397d18\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ck77w" Feb 02 17:28:39 crc kubenswrapper[4858]: I0202 17:28:39.847358 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b2eeae9-b158-4d59-8056-b12e1a397d18-cert\") pod \"infra-operator-controller-manager-79955696d6-ck77w\" (UID: \"5b2eeae9-b158-4d59-8056-b12e1a397d18\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-ck77w" Feb 02 17:28:39 crc kubenswrapper[4858]: I0202 17:28:39.887965 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ck77w" Feb 02 17:28:40 crc kubenswrapper[4858]: I0202 17:28:40.339385 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-rgpmv" podStartSLOduration=7.039363353 podStartE2EDuration="17.339360224s" podCreationTimestamp="2026-02-02 17:28:23 +0000 UTC" firstStartedPulling="2026-02-02 17:28:25.221636412 +0000 UTC m=+806.374051677" lastFinishedPulling="2026-02-02 17:28:35.521633283 +0000 UTC m=+816.674048548" observedRunningTime="2026-02-02 17:28:37.084268451 +0000 UTC m=+818.236683716" watchObservedRunningTime="2026-02-02 17:28:40.339360224 +0000 UTC m=+821.491775499" Feb 02 17:28:40 crc kubenswrapper[4858]: I0202 17:28:40.343693 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-ck77w"] Feb 02 17:28:40 crc kubenswrapper[4858]: I0202 17:28:40.448142 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00e707da-7230-4214-82a0-e1b18aad70a8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf\" (UID: \"00e707da-7230-4214-82a0-e1b18aad70a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf" Feb 02 17:28:40 crc kubenswrapper[4858]: I0202 17:28:40.454489 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/00e707da-7230-4214-82a0-e1b18aad70a8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf\" (UID: \"00e707da-7230-4214-82a0-e1b18aad70a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf" Feb 02 17:28:40 crc kubenswrapper[4858]: I0202 17:28:40.530337 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf" Feb 02 17:28:40 crc kubenswrapper[4858]: I0202 17:28:40.651194 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-metrics-certs\") pod \"openstack-operator-controller-manager-86df59f79f-rczsp\" (UID: \"ad13cd52-7254-489a-8960-511bbc2a3360\") " pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp" Feb 02 17:28:40 crc kubenswrapper[4858]: I0202 17:28:40.651246 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-webhook-certs\") pod \"openstack-operator-controller-manager-86df59f79f-rczsp\" (UID: \"ad13cd52-7254-489a-8960-511bbc2a3360\") " pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp" Feb 02 17:28:40 crc kubenswrapper[4858]: E0202 17:28:40.651432 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 17:28:40 crc kubenswrapper[4858]: E0202 17:28:40.651519 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-webhook-certs podName:ad13cd52-7254-489a-8960-511bbc2a3360 nodeName:}" failed. No retries permitted until 2026-02-02 17:28:56.651469688 +0000 UTC m=+837.803884953 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-webhook-certs") pod "openstack-operator-controller-manager-86df59f79f-rczsp" (UID: "ad13cd52-7254-489a-8960-511bbc2a3360") : secret "webhook-server-cert" not found Feb 02 17:28:40 crc kubenswrapper[4858]: E0202 17:28:40.652170 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 17:28:40 crc kubenswrapper[4858]: E0202 17:28:40.652209 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-metrics-certs podName:ad13cd52-7254-489a-8960-511bbc2a3360 nodeName:}" failed. No retries permitted until 2026-02-02 17:28:56.652197879 +0000 UTC m=+837.804613144 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-metrics-certs") pod "openstack-operator-controller-manager-86df59f79f-rczsp" (UID: "ad13cd52-7254-489a-8960-511bbc2a3360") : secret "metrics-server-cert" not found Feb 02 17:28:40 crc kubenswrapper[4858]: I0202 17:28:40.869601 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ck77w" event={"ID":"5b2eeae9-b158-4d59-8056-b12e1a397d18","Type":"ContainerStarted","Data":"b9bc7e7b6235fea2f133e75147052406f46806416a59384536613bbaf09488ed"} Feb 02 17:28:41 crc kubenswrapper[4858]: I0202 17:28:41.001126 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf"] Feb 02 17:28:41 crc kubenswrapper[4858]: W0202 17:28:41.011436 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00e707da_7230_4214_82a0_e1b18aad70a8.slice/crio-947c2d6418717f5126133e468c080505066a8a855944420ed28b945f34d47409 WatchSource:0}: Error finding container 947c2d6418717f5126133e468c080505066a8a855944420ed28b945f34d47409: Status 404 returned error can't find the container with id 947c2d6418717f5126133e468c080505066a8a855944420ed28b945f34d47409 Feb 02 17:28:41 crc kubenswrapper[4858]: I0202 17:28:41.878257 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf" event={"ID":"00e707da-7230-4214-82a0-e1b18aad70a8","Type":"ContainerStarted","Data":"947c2d6418717f5126133e468c080505066a8a855944420ed28b945f34d47409"} Feb 02 17:28:44 crc kubenswrapper[4858]: I0202 17:28:44.174757 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-r786j" Feb 02 17:28:44 crc kubenswrapper[4858]: I0202 17:28:44.186134 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-rgpmv" Feb 02 17:28:44 crc kubenswrapper[4858]: I0202 17:28:44.225179 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-kcbss" Feb 02 17:28:44 crc kubenswrapper[4858]: I0202 17:28:44.258164 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-kcgxf" Feb 02 17:28:44 crc kubenswrapper[4858]: I0202 17:28:44.321328 4858 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-kffpf" Feb 02 17:28:44 crc kubenswrapper[4858]: I0202 17:28:44.346942 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rmkrp" Feb 02 17:28:44 crc kubenswrapper[4858]: I0202 17:28:44.381049 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-7cq6h" Feb 02 17:28:44 crc kubenswrapper[4858]: I0202 17:28:44.390005 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2p2q6" Feb 02 17:28:44 crc kubenswrapper[4858]: I0202 17:28:44.524745 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-p9qwv" Feb 02 17:28:44 crc kubenswrapper[4858]: I0202 17:28:44.602855 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-vpbp7" Feb 02 17:28:44 crc kubenswrapper[4858]: I0202 17:28:44.645400 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-2g5g6" Feb 02 17:28:44 crc kubenswrapper[4858]: I0202 17:28:44.879850 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-zlfqp" Feb 02 17:28:44 crc kubenswrapper[4858]: I0202 17:28:44.953765 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-srbzf" Feb 02 17:28:48 crc kubenswrapper[4858]: I0202 17:28:48.936443 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-rl5z2" event={"ID":"467af09f-e1d2-407e-989e-606a3a3219b0","Type":"ContainerStarted","Data":"0ca71ce914a7baff72650d605906052fad8200968863c57d0fbae037b0adabf8"} Feb 02 17:28:48 crc kubenswrapper[4858]: I0202 17:28:48.937471 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-rl5z2" Feb 02 17:28:48 crc kubenswrapper[4858]: I0202 17:28:48.938005 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf" event={"ID":"00e707da-7230-4214-82a0-e1b18aad70a8","Type":"ContainerStarted","Data":"fe6a50a5f1304a0d68e2aa3a3409659b0f37778af757c8b2f016c1450c0d3a35"} Feb 02 17:28:48 crc kubenswrapper[4858]: I0202 17:28:48.938259 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf" Feb 02 17:28:48 crc kubenswrapper[4858]: I0202 17:28:48.940119 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-99rfw" event={"ID":"76ec111a-d121-411c-9d81-8fcfd6323d49","Type":"ContainerStarted","Data":"6d607f137ad4fd7767cc1fef66a4bcfabbba9fd11af2c4acc71b4990e32f5f9f"} Feb 02 17:28:48 crc kubenswrapper[4858]: I0202 17:28:48.940327 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-99rfw" Feb 02 
17:28:48 crc kubenswrapper[4858]: I0202 17:28:48.941777 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8lqf9" event={"ID":"3733a396-b067-4153-891a-1c5b044a7e04","Type":"ContainerStarted","Data":"b01634ec9fcda95e8431627c14f81d325e0f5d86db23fa256e27245c8daff939"} Feb 02 17:28:48 crc kubenswrapper[4858]: I0202 17:28:48.943761 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ck77w" event={"ID":"5b2eeae9-b158-4d59-8056-b12e1a397d18","Type":"ContainerStarted","Data":"36cdde1cc8dae69c2bfa52450865b57ee89a4a81b5b34e72087e3ef8e514869c"} Feb 02 17:28:48 crc kubenswrapper[4858]: I0202 17:28:48.943877 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ck77w" Feb 02 17:28:48 crc kubenswrapper[4858]: I0202 17:28:48.945163 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-8r9sc" event={"ID":"80f2567c-89d7-4350-a7f2-acd472bc2f68","Type":"ContainerStarted","Data":"9ad74b4aacbb35a2ae161b0f288ea9ccb3aa7cbe3b98aac328f8952d0ead3b0e"} Feb 02 17:28:48 crc kubenswrapper[4858]: I0202 17:28:48.945371 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-8r9sc" Feb 02 17:28:48 crc kubenswrapper[4858]: I0202 17:28:48.947508 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tfvcz" event={"ID":"405115c4-bd24-4b05-b437-a8a27bc1f2b5","Type":"ContainerStarted","Data":"40b54bede5277787ba9e612679b60c6b5975619163929ca49aea3f124a562af6"} Feb 02 17:28:48 crc kubenswrapper[4858]: I0202 17:28:48.948023 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tfvcz" Feb 02 17:28:48 crc kubenswrapper[4858]: I0202 17:28:48.949558 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-z7cp9" event={"ID":"da72a0b3-6998-4d0e-b7d3-f4fce5f11f1b","Type":"ContainerStarted","Data":"a04d34687d59c077924a2584257db6fb105b20b7d34814a93642ba893851b9bb"} Feb 02 17:28:48 crc kubenswrapper[4858]: I0202 17:28:48.949715 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-z7cp9" Feb 02 17:28:48 crc kubenswrapper[4858]: I0202 17:28:48.950837 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-4jd6l" event={"ID":"366ee9f4-9c6e-416a-8603-f6bac0530a6a","Type":"ContainerStarted","Data":"d0637cf432dbae1fb4a22f819c64ba4a2fea10c40c39729a70ae839003fdac83"} Feb 02 17:28:48 crc kubenswrapper[4858]: I0202 17:28:48.950988 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-4jd6l" Feb 02 17:28:48 crc kubenswrapper[4858]: I0202 17:28:48.968359 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-rl5z2" podStartSLOduration=6.390398975 podStartE2EDuration="24.968342315s" podCreationTimestamp="2026-02-02 17:28:24 +0000 UTC" firstStartedPulling="2026-02-02 17:28:25.7347806 +0000 UTC m=+806.887195865" 
lastFinishedPulling="2026-02-02 17:28:44.31272394 +0000 UTC m=+825.465139205" observedRunningTime="2026-02-02 17:28:48.964024969 +0000 UTC m=+830.116440234" watchObservedRunningTime="2026-02-02 17:28:48.968342315 +0000 UTC m=+830.120757580" Feb 02 17:28:48 crc kubenswrapper[4858]: I0202 17:28:48.982873 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-4jd6l" podStartSLOduration=2.785518818 podStartE2EDuration="24.982858429s" podCreationTimestamp="2026-02-02 17:28:24 +0000 UTC" firstStartedPulling="2026-02-02 17:28:25.610244757 +0000 UTC m=+806.762660022" lastFinishedPulling="2026-02-02 17:28:47.807584368 +0000 UTC m=+828.959999633" observedRunningTime="2026-02-02 17:28:48.979537762 +0000 UTC m=+830.131953027" watchObservedRunningTime="2026-02-02 17:28:48.982858429 +0000 UTC m=+830.135273704" Feb 02 17:28:49 crc kubenswrapper[4858]: I0202 17:28:49.016730 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ck77w" podStartSLOduration=20.498884221 podStartE2EDuration="26.016714096s" podCreationTimestamp="2026-02-02 17:28:23 +0000 UTC" firstStartedPulling="2026-02-02 17:28:40.39167031 +0000 UTC m=+821.544085575" lastFinishedPulling="2026-02-02 17:28:45.909500155 +0000 UTC m=+827.061915450" observedRunningTime="2026-02-02 17:28:49.001119912 +0000 UTC m=+830.153535197" watchObservedRunningTime="2026-02-02 17:28:49.016714096 +0000 UTC m=+830.169129361" Feb 02 17:28:49 crc kubenswrapper[4858]: I0202 17:28:49.033702 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-99rfw" podStartSLOduration=3.786099334 podStartE2EDuration="26.033683981s" podCreationTimestamp="2026-02-02 17:28:23 +0000 UTC" firstStartedPulling="2026-02-02 17:28:25.608232388 +0000 UTC m=+806.760647653" lastFinishedPulling="2026-02-02 17:28:47.855817025 +0000 UTC m=+829.008232300" observedRunningTime="2026-02-02 17:28:49.029779067 +0000 UTC m=+830.182194352" watchObservedRunningTime="2026-02-02 17:28:49.033683981 +0000 UTC m=+830.186099256" Feb 02 17:28:49 crc kubenswrapper[4858]: I0202 17:28:49.053901 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-z7cp9" podStartSLOduration=2.911853763 podStartE2EDuration="25.053884731s" podCreationTimestamp="2026-02-02 17:28:24 +0000 UTC" firstStartedPulling="2026-02-02 17:28:25.713860769 +0000 UTC m=+806.866276034" lastFinishedPulling="2026-02-02 17:28:47.855891697 +0000 UTC m=+829.008307002" observedRunningTime="2026-02-02 17:28:49.050462701 +0000 UTC m=+830.202877966" watchObservedRunningTime="2026-02-02 17:28:49.053884731 +0000 UTC m=+830.206299996" Feb 02 17:28:49 crc kubenswrapper[4858]: I0202 17:28:49.080217 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf" podStartSLOduration=18.239813667 podStartE2EDuration="25.080199118s" podCreationTimestamp="2026-02-02 17:28:24 +0000 UTC" firstStartedPulling="2026-02-02 17:28:41.014531598 +0000 UTC m=+822.166946863" lastFinishedPulling="2026-02-02 17:28:47.854917049 +0000 UTC m=+829.007332314" observedRunningTime="2026-02-02 17:28:49.073780611 +0000 UTC m=+830.226195886" watchObservedRunningTime="2026-02-02 17:28:49.080199118 +0000 UTC m=+830.232614383" Feb 02 17:28:49 crc 
kubenswrapper[4858]: I0202 17:28:49.098783 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tfvcz" podStartSLOduration=5.734036951 podStartE2EDuration="25.09876467s" podCreationTimestamp="2026-02-02 17:28:24 +0000 UTC" firstStartedPulling="2026-02-02 17:28:25.624455742 +0000 UTC m=+806.776871007" lastFinishedPulling="2026-02-02 17:28:44.989183461 +0000 UTC m=+826.141598726" observedRunningTime="2026-02-02 17:28:49.093572038 +0000 UTC m=+830.245987303" watchObservedRunningTime="2026-02-02 17:28:49.09876467 +0000 UTC m=+830.251179955" Feb 02 17:28:49 crc kubenswrapper[4858]: I0202 17:28:49.143811 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-8lqf9" podStartSLOduration=2.971070699 podStartE2EDuration="25.143797493s" podCreationTimestamp="2026-02-02 17:28:24 +0000 UTC" firstStartedPulling="2026-02-02 17:28:25.737586501 +0000 UTC m=+806.890001766" lastFinishedPulling="2026-02-02 17:28:47.910313285 +0000 UTC m=+829.062728560" observedRunningTime="2026-02-02 17:28:49.116422645 +0000 UTC m=+830.268837910" watchObservedRunningTime="2026-02-02 17:28:49.143797493 +0000 UTC m=+830.296212758" Feb 02 17:28:49 crc kubenswrapper[4858]: I0202 17:28:49.145064 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-8r9sc" podStartSLOduration=2.868372114 podStartE2EDuration="25.14505796s" podCreationTimestamp="2026-02-02 17:28:24 +0000 UTC" firstStartedPulling="2026-02-02 17:28:25.633601268 +0000 UTC m=+806.786016533" lastFinishedPulling="2026-02-02 17:28:47.910287114 +0000 UTC m=+829.062702379" observedRunningTime="2026-02-02 17:28:49.140166847 +0000 UTC m=+830.292582122" watchObservedRunningTime="2026-02-02 17:28:49.14505796 +0000 UTC m=+830.297473225" Feb 02 17:28:54 crc kubenswrapper[4858]: I0202 17:28:54.535545 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-99rfw" Feb 02 17:28:54 crc kubenswrapper[4858]: I0202 17:28:54.707892 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-tfvcz" Feb 02 17:28:54 crc kubenswrapper[4858]: I0202 17:28:54.784877 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-8r9sc" Feb 02 17:28:54 crc kubenswrapper[4858]: I0202 17:28:54.984111 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-4jd6l" Feb 02 17:28:55 crc kubenswrapper[4858]: I0202 17:28:55.029846 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-z7cp9" Feb 02 17:28:55 crc kubenswrapper[4858]: I0202 17:28:55.035965 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-rl5z2" Feb 02 17:28:56 crc kubenswrapper[4858]: I0202 17:28:56.685016 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-metrics-certs\") pod \"openstack-operator-controller-manager-86df59f79f-rczsp\" (UID: 
\"ad13cd52-7254-489a-8960-511bbc2a3360\") " pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp" Feb 02 17:28:56 crc kubenswrapper[4858]: I0202 17:28:56.685107 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-webhook-certs\") pod \"openstack-operator-controller-manager-86df59f79f-rczsp\" (UID: \"ad13cd52-7254-489a-8960-511bbc2a3360\") " pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp" Feb 02 17:28:56 crc kubenswrapper[4858]: I0202 17:28:56.691771 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-metrics-certs\") pod \"openstack-operator-controller-manager-86df59f79f-rczsp\" (UID: \"ad13cd52-7254-489a-8960-511bbc2a3360\") " pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp" Feb 02 17:28:56 crc kubenswrapper[4858]: I0202 17:28:56.695796 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad13cd52-7254-489a-8960-511bbc2a3360-webhook-certs\") pod \"openstack-operator-controller-manager-86df59f79f-rczsp\" (UID: \"ad13cd52-7254-489a-8960-511bbc2a3360\") " pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp" Feb 02 17:28:56 crc kubenswrapper[4858]: I0202 17:28:56.848914 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp" Feb 02 17:28:57 crc kubenswrapper[4858]: I0202 17:28:57.135344 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp"] Feb 02 17:28:57 crc kubenswrapper[4858]: I0202 17:28:57.808288 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:28:57 crc kubenswrapper[4858]: I0202 17:28:57.808702 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:28:57 crc kubenswrapper[4858]: I0202 17:28:57.808767 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" Feb 02 17:28:57 crc kubenswrapper[4858]: I0202 17:28:57.809376 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a4515f303cdc3d4371d56381b323ae5d013576ca1083c363dcab5d75f03e2725"} pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 17:28:57 crc kubenswrapper[4858]: I0202 17:28:57.809454 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" 
containerID="cri-o://a4515f303cdc3d4371d56381b323ae5d013576ca1083c363dcab5d75f03e2725" gracePeriod=600 Feb 02 17:28:58 crc kubenswrapper[4858]: I0202 17:28:58.019944 4858 generic.go:334] "Generic (PLEG): container finished" podID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerID="a4515f303cdc3d4371d56381b323ae5d013576ca1083c363dcab5d75f03e2725" exitCode=0 Feb 02 17:28:58 crc kubenswrapper[4858]: I0202 17:28:58.020003 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerDied","Data":"a4515f303cdc3d4371d56381b323ae5d013576ca1083c363dcab5d75f03e2725"} Feb 02 17:28:58 crc kubenswrapper[4858]: I0202 17:28:58.020047 4858 scope.go:117] "RemoveContainer" containerID="bb53defa249b6a080019d6db0213995becaf964ff75fe4b36f783c31a6f70e41" Feb 02 17:28:58 crc kubenswrapper[4858]: I0202 17:28:58.021366 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp" event={"ID":"ad13cd52-7254-489a-8960-511bbc2a3360","Type":"ContainerStarted","Data":"3de244794cccfaded78fc902c396ef422bcdb8d1afd7985a7b227d3492c12647"} Feb 02 17:28:59 crc kubenswrapper[4858]: I0202 17:28:59.898013 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-ck77w" Feb 02 17:29:00 crc kubenswrapper[4858]: I0202 17:29:00.539445 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf" Feb 02 17:29:04 crc kubenswrapper[4858]: I0202 17:29:04.067650 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp" event={"ID":"ad13cd52-7254-489a-8960-511bbc2a3360","Type":"ContainerStarted","Data":"518a5668d64e096ff3b36eadc60495bc91fcc1953fec56b05e0650a407c1ef1a"} Feb 02 17:29:04 crc kubenswrapper[4858]: I0202 17:29:04.068049 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp" Feb 02 17:29:04 crc kubenswrapper[4858]: I0202 17:29:04.099032 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp" podStartSLOduration=40.099008728 podStartE2EDuration="40.099008728s" podCreationTimestamp="2026-02-02 17:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:29:04.094560853 +0000 UTC m=+845.246976118" watchObservedRunningTime="2026-02-02 17:29:04.099008728 +0000 UTC m=+845.251424023" Feb 02 17:29:05 crc kubenswrapper[4858]: I0202 17:29:05.077361 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerStarted","Data":"b38cf52a6ef125bca2bfc0fb953106251c191f34dbe401ffe7c0fa9cbe521a8f"} Feb 02 17:29:16 crc kubenswrapper[4858]: I0202 17:29:16.858135 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-86df59f79f-rczsp" Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.615437 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q4f9j"] 
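The mount failures above retry on a doubling schedule: the "cert", "webhook-certs", and "metrics-certs" operations are refused for 4s (retry at m=+812), then 8s (m=+820), then 16s (m=+837) before each next attempt, until the missing infra-operator-webhook-server-cert, openstack-baremetal-operator-webhook-server-cert, webhook-server-cert, and metrics-server-cert secrets finally exist and "MountVolume.SetUp succeeded" is logged at 17:28:39, 17:28:40, and 17:28:56. The sketch below reproduces just that capped-doubling retry pattern under stated assumptions; it is a minimal illustration, not kubelet's nestedpendingoperations code, and every identifier in it (mountOnce, initialDelay, the 2-minute cap) is invented for the example.

// backoff_sketch.go
//
// Minimal sketch of the doubling retry delay visible in the mount failures
// above (durationBeforeRetry 4s -> 8s -> 16s). NOT kubelet's implementation;
// all names here are invented for illustration.
package main

import (
	"errors"
	"fmt"
	"time"
)

// errSecretNotFound stands in for the `secret "..." not found` condition
// reported by secret.go:188 in the log.
var errSecretNotFound = errors.New(`secret "webhook-server-cert" not found`)

// mountOnce is a placeholder for one MountVolume.SetUp attempt. In the log,
// early attempts fail because the secret has not been created yet.
func mountOnce(attempt int) error {
	if attempt < 4 {
		return errSecretNotFound
	}
	return nil // once the secret exists, the mount succeeds
}

func main() {
	const (
		initialDelay = 4 * time.Second // first durationBeforeRetry in the log
		maxDelay     = 2 * time.Minute // assumed cap; the log never reaches it
	)
	delay := initialDelay
	for attempt := 1; ; attempt++ {
		err := mountOnce(attempt)
		if err == nil {
			fmt.Printf("attempt %d: MountVolume.SetUp succeeded\n", attempt)
			return
		}
		fmt.Printf("attempt %d: %v; no retries permitted for %s\n", attempt, err, delay)
		time.Sleep(delay) // kubelet schedules a retry rather than sleeping
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}

The ImagePullBackOff entries earlier in the section follow the same shape: the kubelet declines to re-pull the quay.io/openstack-k8s-operators/* images on an increasing back-off until the pulls complete (lastFinishedPulling around 17:28:35-17:28:47), after which the ContainerStarted events, readiness probes, and pod startup durations above are recorded.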
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.617409 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q4f9j"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.619076 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.619880 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-wl92f"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.622209 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.622811 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.628827 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q4f9j"]
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.674209 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-w6mdx"]
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.675271 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-w6mdx"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.677811 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.689472 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-w6mdx"]
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.735912 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db5cf574-1e7f-4bbd-8d06-b890e82bae03-config\") pod \"dnsmasq-dns-675f4bcbfc-q4f9j\" (UID: \"db5cf574-1e7f-4bbd-8d06-b890e82bae03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q4f9j"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.736120 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxk6n\" (UniqueName: \"kubernetes.io/projected/db5cf574-1e7f-4bbd-8d06-b890e82bae03-kube-api-access-wxk6n\") pod \"dnsmasq-dns-675f4bcbfc-q4f9j\" (UID: \"db5cf574-1e7f-4bbd-8d06-b890e82bae03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q4f9j"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.837722 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxk6n\" (UniqueName: \"kubernetes.io/projected/db5cf574-1e7f-4bbd-8d06-b890e82bae03-kube-api-access-wxk6n\") pod \"dnsmasq-dns-675f4bcbfc-q4f9j\" (UID: \"db5cf574-1e7f-4bbd-8d06-b890e82bae03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q4f9j"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.837823 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff5d9e29-80ad-4627-ba54-0fd7bf4e84de-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-w6mdx\" (UID: \"ff5d9e29-80ad-4627-ba54-0fd7bf4e84de\") " pod="openstack/dnsmasq-dns-78dd6ddcc-w6mdx"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.837968 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5d9e29-80ad-4627-ba54-0fd7bf4e84de-config\") pod \"dnsmasq-dns-78dd6ddcc-w6mdx\" (UID: \"ff5d9e29-80ad-4627-ba54-0fd7bf4e84de\") " pod="openstack/dnsmasq-dns-78dd6ddcc-w6mdx"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.838035 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db5cf574-1e7f-4bbd-8d06-b890e82bae03-config\") pod \"dnsmasq-dns-675f4bcbfc-q4f9j\" (UID: \"db5cf574-1e7f-4bbd-8d06-b890e82bae03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q4f9j"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.838149 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5rxt\" (UniqueName: \"kubernetes.io/projected/ff5d9e29-80ad-4627-ba54-0fd7bf4e84de-kube-api-access-c5rxt\") pod \"dnsmasq-dns-78dd6ddcc-w6mdx\" (UID: \"ff5d9e29-80ad-4627-ba54-0fd7bf4e84de\") " pod="openstack/dnsmasq-dns-78dd6ddcc-w6mdx"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.839560 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db5cf574-1e7f-4bbd-8d06-b890e82bae03-config\") pod \"dnsmasq-dns-675f4bcbfc-q4f9j\" (UID: \"db5cf574-1e7f-4bbd-8d06-b890e82bae03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q4f9j"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.859253 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxk6n\" (UniqueName: \"kubernetes.io/projected/db5cf574-1e7f-4bbd-8d06-b890e82bae03-kube-api-access-wxk6n\") pod \"dnsmasq-dns-675f4bcbfc-q4f9j\" (UID: \"db5cf574-1e7f-4bbd-8d06-b890e82bae03\") " pod="openstack/dnsmasq-dns-675f4bcbfc-q4f9j"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.935385 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q4f9j"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.939328 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff5d9e29-80ad-4627-ba54-0fd7bf4e84de-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-w6mdx\" (UID: \"ff5d9e29-80ad-4627-ba54-0fd7bf4e84de\") " pod="openstack/dnsmasq-dns-78dd6ddcc-w6mdx"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.939594 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5d9e29-80ad-4627-ba54-0fd7bf4e84de-config\") pod \"dnsmasq-dns-78dd6ddcc-w6mdx\" (UID: \"ff5d9e29-80ad-4627-ba54-0fd7bf4e84de\") " pod="openstack/dnsmasq-dns-78dd6ddcc-w6mdx"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.939746 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5rxt\" (UniqueName: \"kubernetes.io/projected/ff5d9e29-80ad-4627-ba54-0fd7bf4e84de-kube-api-access-c5rxt\") pod \"dnsmasq-dns-78dd6ddcc-w6mdx\" (UID: \"ff5d9e29-80ad-4627-ba54-0fd7bf4e84de\") " pod="openstack/dnsmasq-dns-78dd6ddcc-w6mdx"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.940237 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff5d9e29-80ad-4627-ba54-0fd7bf4e84de-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-w6mdx\" (UID: \"ff5d9e29-80ad-4627-ba54-0fd7bf4e84de\") " pod="openstack/dnsmasq-dns-78dd6ddcc-w6mdx"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.940643 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5d9e29-80ad-4627-ba54-0fd7bf4e84de-config\") pod \"dnsmasq-dns-78dd6ddcc-w6mdx\" (UID: \"ff5d9e29-80ad-4627-ba54-0fd7bf4e84de\") " pod="openstack/dnsmasq-dns-78dd6ddcc-w6mdx"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.969097 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5rxt\" (UniqueName: \"kubernetes.io/projected/ff5d9e29-80ad-4627-ba54-0fd7bf4e84de-kube-api-access-c5rxt\") pod \"dnsmasq-dns-78dd6ddcc-w6mdx\" (UID: \"ff5d9e29-80ad-4627-ba54-0fd7bf4e84de\") " pod="openstack/dnsmasq-dns-78dd6ddcc-w6mdx"
Feb 02 17:29:32 crc kubenswrapper[4858]: I0202 17:29:32.992473 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-w6mdx"
Feb 02 17:29:33 crc kubenswrapper[4858]: I0202 17:29:33.375810 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q4f9j"]
Feb 02 17:29:33 crc kubenswrapper[4858]: W0202 17:29:33.384634 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb5cf574_1e7f_4bbd_8d06_b890e82bae03.slice/crio-fb7dc5618330448d76ae7645168966a0cffd822e0cc1189288cfc9bf1d591b4e WatchSource:0}: Error finding container fb7dc5618330448d76ae7645168966a0cffd822e0cc1189288cfc9bf1d591b4e: Status 404 returned error can't find the container with id fb7dc5618330448d76ae7645168966a0cffd822e0cc1189288cfc9bf1d591b4e
Feb 02 17:29:33 crc kubenswrapper[4858]: I0202 17:29:33.448607 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-w6mdx"]
Feb 02 17:29:33 crc kubenswrapper[4858]: W0202 17:29:33.449947 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff5d9e29_80ad_4627_ba54_0fd7bf4e84de.slice/crio-16f4c2675096c67a7f101bce4adedee9d21a108ec46b9e9fa2bcb2271d09b9a0 WatchSource:0}: Error finding container 16f4c2675096c67a7f101bce4adedee9d21a108ec46b9e9fa2bcb2271d09b9a0: Status 404 returned error can't find the container with id 16f4c2675096c67a7f101bce4adedee9d21a108ec46b9e9fa2bcb2271d09b9a0
Feb 02 17:29:34 crc kubenswrapper[4858]: I0202 17:29:34.289936 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-q4f9j" event={"ID":"db5cf574-1e7f-4bbd-8d06-b890e82bae03","Type":"ContainerStarted","Data":"fb7dc5618330448d76ae7645168966a0cffd822e0cc1189288cfc9bf1d591b4e"}
Feb 02 17:29:34 crc kubenswrapper[4858]: I0202 17:29:34.292340 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-w6mdx" event={"ID":"ff5d9e29-80ad-4627-ba54-0fd7bf4e84de","Type":"ContainerStarted","Data":"16f4c2675096c67a7f101bce4adedee9d21a108ec46b9e9fa2bcb2271d09b9a0"}
Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.205235 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q4f9j"]
Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.228347 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s5lks"]
Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.229457 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-s5lks"
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-s5lks" Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.235304 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s5lks"] Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.376989 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qcvt\" (UniqueName: \"kubernetes.io/projected/5a4f5119-0d30-4fa2-87c3-55aa74010bec-kube-api-access-7qcvt\") pod \"dnsmasq-dns-666b6646f7-s5lks\" (UID: \"5a4f5119-0d30-4fa2-87c3-55aa74010bec\") " pod="openstack/dnsmasq-dns-666b6646f7-s5lks" Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.377096 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a4f5119-0d30-4fa2-87c3-55aa74010bec-config\") pod \"dnsmasq-dns-666b6646f7-s5lks\" (UID: \"5a4f5119-0d30-4fa2-87c3-55aa74010bec\") " pod="openstack/dnsmasq-dns-666b6646f7-s5lks" Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.377154 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a4f5119-0d30-4fa2-87c3-55aa74010bec-dns-svc\") pod \"dnsmasq-dns-666b6646f7-s5lks\" (UID: \"5a4f5119-0d30-4fa2-87c3-55aa74010bec\") " pod="openstack/dnsmasq-dns-666b6646f7-s5lks" Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.478020 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qcvt\" (UniqueName: \"kubernetes.io/projected/5a4f5119-0d30-4fa2-87c3-55aa74010bec-kube-api-access-7qcvt\") pod \"dnsmasq-dns-666b6646f7-s5lks\" (UID: \"5a4f5119-0d30-4fa2-87c3-55aa74010bec\") " pod="openstack/dnsmasq-dns-666b6646f7-s5lks" Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.478094 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a4f5119-0d30-4fa2-87c3-55aa74010bec-config\") pod \"dnsmasq-dns-666b6646f7-s5lks\" (UID: \"5a4f5119-0d30-4fa2-87c3-55aa74010bec\") " pod="openstack/dnsmasq-dns-666b6646f7-s5lks" Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.478153 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a4f5119-0d30-4fa2-87c3-55aa74010bec-dns-svc\") pod \"dnsmasq-dns-666b6646f7-s5lks\" (UID: \"5a4f5119-0d30-4fa2-87c3-55aa74010bec\") " pod="openstack/dnsmasq-dns-666b6646f7-s5lks" Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.479177 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a4f5119-0d30-4fa2-87c3-55aa74010bec-dns-svc\") pod \"dnsmasq-dns-666b6646f7-s5lks\" (UID: \"5a4f5119-0d30-4fa2-87c3-55aa74010bec\") " pod="openstack/dnsmasq-dns-666b6646f7-s5lks" Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.479555 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a4f5119-0d30-4fa2-87c3-55aa74010bec-config\") pod \"dnsmasq-dns-666b6646f7-s5lks\" (UID: \"5a4f5119-0d30-4fa2-87c3-55aa74010bec\") " pod="openstack/dnsmasq-dns-666b6646f7-s5lks" Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.527268 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qcvt\" (UniqueName: 
\"kubernetes.io/projected/5a4f5119-0d30-4fa2-87c3-55aa74010bec-kube-api-access-7qcvt\") pod \"dnsmasq-dns-666b6646f7-s5lks\" (UID: \"5a4f5119-0d30-4fa2-87c3-55aa74010bec\") " pod="openstack/dnsmasq-dns-666b6646f7-s5lks" Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.595249 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-s5lks" Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.619181 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-w6mdx"] Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.650656 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4sv2z"] Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.652195 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.673531 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4sv2z"] Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.684344 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g6bk\" (UniqueName: \"kubernetes.io/projected/8094eb3b-7f98-407b-8e5d-551ef561716b-kube-api-access-9g6bk\") pod \"dnsmasq-dns-57d769cc4f-4sv2z\" (UID: \"8094eb3b-7f98-407b-8e5d-551ef561716b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.684390 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8094eb3b-7f98-407b-8e5d-551ef561716b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4sv2z\" (UID: \"8094eb3b-7f98-407b-8e5d-551ef561716b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.684419 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8094eb3b-7f98-407b-8e5d-551ef561716b-config\") pod \"dnsmasq-dns-57d769cc4f-4sv2z\" (UID: \"8094eb3b-7f98-407b-8e5d-551ef561716b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.786345 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g6bk\" (UniqueName: \"kubernetes.io/projected/8094eb3b-7f98-407b-8e5d-551ef561716b-kube-api-access-9g6bk\") pod \"dnsmasq-dns-57d769cc4f-4sv2z\" (UID: \"8094eb3b-7f98-407b-8e5d-551ef561716b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.786410 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8094eb3b-7f98-407b-8e5d-551ef561716b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4sv2z\" (UID: \"8094eb3b-7f98-407b-8e5d-551ef561716b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.786475 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8094eb3b-7f98-407b-8e5d-551ef561716b-config\") pod \"dnsmasq-dns-57d769cc4f-4sv2z\" (UID: \"8094eb3b-7f98-407b-8e5d-551ef561716b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.787628 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8094eb3b-7f98-407b-8e5d-551ef561716b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4sv2z\" (UID: \"8094eb3b-7f98-407b-8e5d-551ef561716b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.788810 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8094eb3b-7f98-407b-8e5d-551ef561716b-config\") pod \"dnsmasq-dns-57d769cc4f-4sv2z\" (UID: \"8094eb3b-7f98-407b-8e5d-551ef561716b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" Feb 02 17:29:35 crc kubenswrapper[4858]: I0202 17:29:35.809762 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g6bk\" (UniqueName: \"kubernetes.io/projected/8094eb3b-7f98-407b-8e5d-551ef561716b-kube-api-access-9g6bk\") pod \"dnsmasq-dns-57d769cc4f-4sv2z\" (UID: \"8094eb3b-7f98-407b-8e5d-551ef561716b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.032283 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.155480 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s5lks"] Feb 02 17:29:36 crc kubenswrapper[4858]: W0202 17:29:36.190465 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a4f5119_0d30_4fa2_87c3_55aa74010bec.slice/crio-98d16a965008f799a96fc98af47f6f82503cd38e2ba57d316a02f59681851cd5 WatchSource:0}: Error finding container 98d16a965008f799a96fc98af47f6f82503cd38e2ba57d316a02f59681851cd5: Status 404 returned error can't find the container with id 98d16a965008f799a96fc98af47f6f82503cd38e2ba57d316a02f59681851cd5 Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.197727 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.315761 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-s5lks" event={"ID":"5a4f5119-0d30-4fa2-87c3-55aa74010bec","Type":"ContainerStarted","Data":"98d16a965008f799a96fc98af47f6f82503cd38e2ba57d316a02f59681851cd5"} Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.412521 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.413774 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.415557 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.416000 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.416820 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.417245 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.418691 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.419422 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.419470 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.420240 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-wjk9v" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.503729 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4sv2z"] Feb 02 17:29:36 crc kubenswrapper[4858]: W0202 17:29:36.513104 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8094eb3b_7f98_407b_8e5d_551ef561716b.slice/crio-a8fee14c5d8b94b0131e0c5d32de892bf4a6af442d4ce6727e25a3ba76f4bab9 WatchSource:0}: Error finding container a8fee14c5d8b94b0131e0c5d32de892bf4a6af442d4ce6727e25a3ba76f4bab9: Status 404 returned error can't find the container with id a8fee14c5d8b94b0131e0c5d32de892bf4a6af442d4ce6727e25a3ba76f4bab9 Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.600124 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.600209 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/55d221f1-91f9-4045-b94b-95facb25b3dc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.600405 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/55d221f1-91f9-4045-b94b-95facb25b3dc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.601451 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/55d221f1-91f9-4045-b94b-95facb25b3dc-plugins-conf\") pod 
\"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.601500 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.601541 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.601576 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.601616 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dcjm\" (UniqueName: \"kubernetes.io/projected/55d221f1-91f9-4045-b94b-95facb25b3dc-kube-api-access-6dcjm\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.601635 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/55d221f1-91f9-4045-b94b-95facb25b3dc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.601661 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55d221f1-91f9-4045-b94b-95facb25b3dc-config-data\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.601712 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.705684 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.705751 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/55d221f1-91f9-4045-b94b-95facb25b3dc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc 
kubenswrapper[4858]: I0202 17:29:36.705799 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/55d221f1-91f9-4045-b94b-95facb25b3dc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.705830 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/55d221f1-91f9-4045-b94b-95facb25b3dc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.705862 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.705907 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.705934 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.707246 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dcjm\" (UniqueName: \"kubernetes.io/projected/55d221f1-91f9-4045-b94b-95facb25b3dc-kube-api-access-6dcjm\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.707190 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.707360 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/55d221f1-91f9-4045-b94b-95facb25b3dc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.707517 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/55d221f1-91f9-4045-b94b-95facb25b3dc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.707944 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.711563 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/55d221f1-91f9-4045-b94b-95facb25b3dc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.711627 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55d221f1-91f9-4045-b94b-95facb25b3dc-config-data\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.711715 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.712939 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.713458 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55d221f1-91f9-4045-b94b-95facb25b3dc-config-data\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.716230 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/55d221f1-91f9-4045-b94b-95facb25b3dc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.716256 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.722693 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dcjm\" (UniqueName: \"kubernetes.io/projected/55d221f1-91f9-4045-b94b-95facb25b3dc-kube-api-access-6dcjm\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.723168 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/55d221f1-91f9-4045-b94b-95facb25b3dc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.726066 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.736185 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.768484 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.783364 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.785206 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.786911 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.787117 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.792210 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.792271 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.792372 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.792378 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.796773 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dcr6q" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.841379 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.951023 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.951190 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.951298 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.951396 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.951431 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.951540 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.951573 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.951623 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.951694 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hkj9\" (UniqueName: \"kubernetes.io/projected/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-kube-api-access-2hkj9\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.951722 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:36 crc kubenswrapper[4858]: I0202 17:29:36.951758 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.053147 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.053209 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.053235 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.053278 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.053297 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.053319 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.053346 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hkj9\" (UniqueName: \"kubernetes.io/projected/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-kube-api-access-2hkj9\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.053369 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.053391 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.053429 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc 
kubenswrapper[4858]: I0202 17:29:37.053446 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.053700 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.054055 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.054084 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.054713 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.057682 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.058373 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.058992 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.061637 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.065705 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.070163 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.083815 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hkj9\" (UniqueName: \"kubernetes.io/projected/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-kube-api-access-2hkj9\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.085114 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.170184 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.325900 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" event={"ID":"8094eb3b-7f98-407b-8e5d-551ef561716b","Type":"ContainerStarted","Data":"a8fee14c5d8b94b0131e0c5d32de892bf4a6af442d4ce6727e25a3ba76f4bab9"} Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.944880 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.946676 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.948515 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.949305 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.949635 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-vbgdf" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.949812 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.955703 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 02 17:29:37 crc kubenswrapper[4858]: I0202 17:29:37.959269 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.067214 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3a24f351-b5a8-444d-b67d-7b9635f5a8aa-config-data-default\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.067320 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a24f351-b5a8-444d-b67d-7b9635f5a8aa-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.067348 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a24f351-b5a8-444d-b67d-7b9635f5a8aa-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.067366 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a24f351-b5a8-444d-b67d-7b9635f5a8aa-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.067388 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k9sq\" (UniqueName: \"kubernetes.io/projected/3a24f351-b5a8-444d-b67d-7b9635f5a8aa-kube-api-access-5k9sq\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.067407 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3a24f351-b5a8-444d-b67d-7b9635f5a8aa-kolla-config\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.067430 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.067484 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3a24f351-b5a8-444d-b67d-7b9635f5a8aa-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.169230 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3a24f351-b5a8-444d-b67d-7b9635f5a8aa-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.169283 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3a24f351-b5a8-444d-b67d-7b9635f5a8aa-config-data-default\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.169338 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a24f351-b5a8-444d-b67d-7b9635f5a8aa-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.169360 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a24f351-b5a8-444d-b67d-7b9635f5a8aa-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.169377 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a24f351-b5a8-444d-b67d-7b9635f5a8aa-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.169398 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k9sq\" (UniqueName: \"kubernetes.io/projected/3a24f351-b5a8-444d-b67d-7b9635f5a8aa-kube-api-access-5k9sq\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.169420 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3a24f351-b5a8-444d-b67d-7b9635f5a8aa-kolla-config\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.169470 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod 
\"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.169804 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.169945 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3a24f351-b5a8-444d-b67d-7b9635f5a8aa-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.170615 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3a24f351-b5a8-444d-b67d-7b9635f5a8aa-config-data-default\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.170789 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a24f351-b5a8-444d-b67d-7b9635f5a8aa-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.170788 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3a24f351-b5a8-444d-b67d-7b9635f5a8aa-kolla-config\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.182503 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a24f351-b5a8-444d-b67d-7b9635f5a8aa-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.187580 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a24f351-b5a8-444d-b67d-7b9635f5a8aa-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.188900 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k9sq\" (UniqueName: \"kubernetes.io/projected/3a24f351-b5a8-444d-b67d-7b9635f5a8aa-kube-api-access-5k9sq\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.214114 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"3a24f351-b5a8-444d-b67d-7b9635f5a8aa\") " pod="openstack/openstack-galera-0" Feb 02 17:29:38 crc kubenswrapper[4858]: I0202 17:29:38.271152 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.113390 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.118290 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.120277 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-4tfdh" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.121938 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.122088 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.122293 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.122324 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.282858 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8a3a3fdc-3021-44f0-8520-da5a88cf03e1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.282927 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8a3a3fdc-3021-44f0-8520-da5a88cf03e1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.283188 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a3a3fdc-3021-44f0-8520-da5a88cf03e1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.283255 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2w52\" (UniqueName: \"kubernetes.io/projected/8a3a3fdc-3021-44f0-8520-da5a88cf03e1-kube-api-access-b2w52\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.283327 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.283421 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a3a3fdc-3021-44f0-8520-da5a88cf03e1-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.283495 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a3a3fdc-3021-44f0-8520-da5a88cf03e1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.283602 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8a3a3fdc-3021-44f0-8520-da5a88cf03e1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.384640 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8a3a3fdc-3021-44f0-8520-da5a88cf03e1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.384704 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8a3a3fdc-3021-44f0-8520-da5a88cf03e1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.384743 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8a3a3fdc-3021-44f0-8520-da5a88cf03e1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.384780 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a3a3fdc-3021-44f0-8520-da5a88cf03e1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.384798 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2w52\" (UniqueName: \"kubernetes.io/projected/8a3a3fdc-3021-44f0-8520-da5a88cf03e1-kube-api-access-b2w52\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.384819 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.384853 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a3a3fdc-3021-44f0-8520-da5a88cf03e1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: 
\"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.384872 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a3a3fdc-3021-44f0-8520-da5a88cf03e1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.386182 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a3a3fdc-3021-44f0-8520-da5a88cf03e1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.386402 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8a3a3fdc-3021-44f0-8520-da5a88cf03e1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.386988 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8a3a3fdc-3021-44f0-8520-da5a88cf03e1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.387373 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8a3a3fdc-3021-44f0-8520-da5a88cf03e1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.388013 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.390846 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a3a3fdc-3021-44f0-8520-da5a88cf03e1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.405046 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a3a3fdc-3021-44f0-8520-da5a88cf03e1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.406636 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.409599 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2w52\" (UniqueName: \"kubernetes.io/projected/8a3a3fdc-3021-44f0-8520-da5a88cf03e1-kube-api-access-b2w52\") pod \"openstack-cell1-galera-0\" (UID: \"8a3a3fdc-3021-44f0-8520-da5a88cf03e1\") " pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.450042 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.671463 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.672572 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.675357 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-cngcj" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.675543 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.675655 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.696408 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.799635 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c386da2d-4b55-47da-aa8c-82b879ae7d3d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c386da2d-4b55-47da-aa8c-82b879ae7d3d\") " pod="openstack/memcached-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.799955 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c386da2d-4b55-47da-aa8c-82b879ae7d3d-config-data\") pod \"memcached-0\" (UID: \"c386da2d-4b55-47da-aa8c-82b879ae7d3d\") " pod="openstack/memcached-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.800124 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c386da2d-4b55-47da-aa8c-82b879ae7d3d-kolla-config\") pod \"memcached-0\" (UID: \"c386da2d-4b55-47da-aa8c-82b879ae7d3d\") " pod="openstack/memcached-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.800244 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c386da2d-4b55-47da-aa8c-82b879ae7d3d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c386da2d-4b55-47da-aa8c-82b879ae7d3d\") " pod="openstack/memcached-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.800405 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm7zx\" (UniqueName: \"kubernetes.io/projected/c386da2d-4b55-47da-aa8c-82b879ae7d3d-kube-api-access-mm7zx\") pod \"memcached-0\" (UID: \"c386da2d-4b55-47da-aa8c-82b879ae7d3d\") " pod="openstack/memcached-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.901876 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/c386da2d-4b55-47da-aa8c-82b879ae7d3d-kolla-config\") pod \"memcached-0\" (UID: \"c386da2d-4b55-47da-aa8c-82b879ae7d3d\") " pod="openstack/memcached-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.902151 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c386da2d-4b55-47da-aa8c-82b879ae7d3d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c386da2d-4b55-47da-aa8c-82b879ae7d3d\") " pod="openstack/memcached-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.902255 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm7zx\" (UniqueName: \"kubernetes.io/projected/c386da2d-4b55-47da-aa8c-82b879ae7d3d-kube-api-access-mm7zx\") pod \"memcached-0\" (UID: \"c386da2d-4b55-47da-aa8c-82b879ae7d3d\") " pod="openstack/memcached-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.902388 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c386da2d-4b55-47da-aa8c-82b879ae7d3d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c386da2d-4b55-47da-aa8c-82b879ae7d3d\") " pod="openstack/memcached-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.902480 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c386da2d-4b55-47da-aa8c-82b879ae7d3d-config-data\") pod \"memcached-0\" (UID: \"c386da2d-4b55-47da-aa8c-82b879ae7d3d\") " pod="openstack/memcached-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.903314 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c386da2d-4b55-47da-aa8c-82b879ae7d3d-kolla-config\") pod \"memcached-0\" (UID: \"c386da2d-4b55-47da-aa8c-82b879ae7d3d\") " pod="openstack/memcached-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.903429 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c386da2d-4b55-47da-aa8c-82b879ae7d3d-config-data\") pod \"memcached-0\" (UID: \"c386da2d-4b55-47da-aa8c-82b879ae7d3d\") " pod="openstack/memcached-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.907450 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c386da2d-4b55-47da-aa8c-82b879ae7d3d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c386da2d-4b55-47da-aa8c-82b879ae7d3d\") " pod="openstack/memcached-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.909484 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c386da2d-4b55-47da-aa8c-82b879ae7d3d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c386da2d-4b55-47da-aa8c-82b879ae7d3d\") " pod="openstack/memcached-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.919653 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm7zx\" (UniqueName: \"kubernetes.io/projected/c386da2d-4b55-47da-aa8c-82b879ae7d3d-kube-api-access-mm7zx\") pod \"memcached-0\" (UID: \"c386da2d-4b55-47da-aa8c-82b879ae7d3d\") " pod="openstack/memcached-0" Feb 02 17:29:39 crc kubenswrapper[4858]: I0202 17:29:39.990420 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 02 17:29:42 crc kubenswrapper[4858]: I0202 17:29:42.026854 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 17:29:42 crc kubenswrapper[4858]: I0202 17:29:42.027996 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 17:29:42 crc kubenswrapper[4858]: I0202 17:29:42.038607 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-cdvlp" Feb 02 17:29:42 crc kubenswrapper[4858]: I0202 17:29:42.049702 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 17:29:42 crc kubenswrapper[4858]: I0202 17:29:42.135863 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2k5g\" (UniqueName: \"kubernetes.io/projected/59a6029b-5965-40a3-9dbd-0b4784340ce0-kube-api-access-r2k5g\") pod \"kube-state-metrics-0\" (UID: \"59a6029b-5965-40a3-9dbd-0b4784340ce0\") " pod="openstack/kube-state-metrics-0" Feb 02 17:29:42 crc kubenswrapper[4858]: I0202 17:29:42.237269 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2k5g\" (UniqueName: \"kubernetes.io/projected/59a6029b-5965-40a3-9dbd-0b4784340ce0-kube-api-access-r2k5g\") pod \"kube-state-metrics-0\" (UID: \"59a6029b-5965-40a3-9dbd-0b4784340ce0\") " pod="openstack/kube-state-metrics-0" Feb 02 17:29:42 crc kubenswrapper[4858]: I0202 17:29:42.260674 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2k5g\" (UniqueName: \"kubernetes.io/projected/59a6029b-5965-40a3-9dbd-0b4784340ce0-kube-api-access-r2k5g\") pod \"kube-state-metrics-0\" (UID: \"59a6029b-5965-40a3-9dbd-0b4784340ce0\") " pod="openstack/kube-state-metrics-0" Feb 02 17:29:42 crc kubenswrapper[4858]: I0202 17:29:42.345879 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.838756 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-tc4gv"] Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.840826 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.844014 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-v6z8r" Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.844235 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.855439 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-h6kmt"] Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.857654 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.860431 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.890680 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-h6kmt"] Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.908211 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tc4gv"] Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.984563 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334dab9b-9793-4424-9c39-27eac5f07626-combined-ca-bundle\") pod \"ovn-controller-h6kmt\" (UID: \"334dab9b-9793-4424-9c39-27eac5f07626\") " pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.984608 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/334dab9b-9793-4424-9c39-27eac5f07626-var-run-ovn\") pod \"ovn-controller-h6kmt\" (UID: \"334dab9b-9793-4424-9c39-27eac5f07626\") " pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.984644 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/334dab9b-9793-4424-9c39-27eac5f07626-var-log-ovn\") pod \"ovn-controller-h6kmt\" (UID: \"334dab9b-9793-4424-9c39-27eac5f07626\") " pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.984675 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrnbl\" (UniqueName: \"kubernetes.io/projected/334dab9b-9793-4424-9c39-27eac5f07626-kube-api-access-wrnbl\") pod \"ovn-controller-h6kmt\" (UID: \"334dab9b-9793-4424-9c39-27eac5f07626\") " pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.984699 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/77df6a52-36fd-44ea-b30e-33041ed49ed6-var-log\") pod \"ovn-controller-ovs-tc4gv\" (UID: \"77df6a52-36fd-44ea-b30e-33041ed49ed6\") " pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.984714 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/334dab9b-9793-4424-9c39-27eac5f07626-ovn-controller-tls-certs\") pod \"ovn-controller-h6kmt\" (UID: \"334dab9b-9793-4424-9c39-27eac5f07626\") " pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.984730 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/77df6a52-36fd-44ea-b30e-33041ed49ed6-var-lib\") pod \"ovn-controller-ovs-tc4gv\" (UID: \"77df6a52-36fd-44ea-b30e-33041ed49ed6\") " pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.984800 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mhfv\" (UniqueName: 
\"kubernetes.io/projected/77df6a52-36fd-44ea-b30e-33041ed49ed6-kube-api-access-2mhfv\") pod \"ovn-controller-ovs-tc4gv\" (UID: \"77df6a52-36fd-44ea-b30e-33041ed49ed6\") " pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.984820 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/334dab9b-9793-4424-9c39-27eac5f07626-scripts\") pod \"ovn-controller-h6kmt\" (UID: \"334dab9b-9793-4424-9c39-27eac5f07626\") " pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.984904 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/77df6a52-36fd-44ea-b30e-33041ed49ed6-etc-ovs\") pod \"ovn-controller-ovs-tc4gv\" (UID: \"77df6a52-36fd-44ea-b30e-33041ed49ed6\") " pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.984941 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77df6a52-36fd-44ea-b30e-33041ed49ed6-scripts\") pod \"ovn-controller-ovs-tc4gv\" (UID: \"77df6a52-36fd-44ea-b30e-33041ed49ed6\") " pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.985025 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/334dab9b-9793-4424-9c39-27eac5f07626-var-run\") pod \"ovn-controller-h6kmt\" (UID: \"334dab9b-9793-4424-9c39-27eac5f07626\") " pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:44 crc kubenswrapper[4858]: I0202 17:29:44.985119 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77df6a52-36fd-44ea-b30e-33041ed49ed6-var-run\") pod \"ovn-controller-ovs-tc4gv\" (UID: \"77df6a52-36fd-44ea-b30e-33041ed49ed6\") " pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.086055 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/334dab9b-9793-4424-9c39-27eac5f07626-var-run\") pod \"ovn-controller-h6kmt\" (UID: \"334dab9b-9793-4424-9c39-27eac5f07626\") " pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.086129 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77df6a52-36fd-44ea-b30e-33041ed49ed6-var-run\") pod \"ovn-controller-ovs-tc4gv\" (UID: \"77df6a52-36fd-44ea-b30e-33041ed49ed6\") " pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.086161 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334dab9b-9793-4424-9c39-27eac5f07626-combined-ca-bundle\") pod \"ovn-controller-h6kmt\" (UID: \"334dab9b-9793-4424-9c39-27eac5f07626\") " pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.086191 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/334dab9b-9793-4424-9c39-27eac5f07626-var-run-ovn\") pod \"ovn-controller-h6kmt\" (UID: 
\"334dab9b-9793-4424-9c39-27eac5f07626\") " pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.086225 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/334dab9b-9793-4424-9c39-27eac5f07626-var-log-ovn\") pod \"ovn-controller-h6kmt\" (UID: \"334dab9b-9793-4424-9c39-27eac5f07626\") " pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.086242 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrnbl\" (UniqueName: \"kubernetes.io/projected/334dab9b-9793-4424-9c39-27eac5f07626-kube-api-access-wrnbl\") pod \"ovn-controller-h6kmt\" (UID: \"334dab9b-9793-4424-9c39-27eac5f07626\") " pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.086267 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/334dab9b-9793-4424-9c39-27eac5f07626-ovn-controller-tls-certs\") pod \"ovn-controller-h6kmt\" (UID: \"334dab9b-9793-4424-9c39-27eac5f07626\") " pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.086291 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/77df6a52-36fd-44ea-b30e-33041ed49ed6-var-log\") pod \"ovn-controller-ovs-tc4gv\" (UID: \"77df6a52-36fd-44ea-b30e-33041ed49ed6\") " pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.086313 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/77df6a52-36fd-44ea-b30e-33041ed49ed6-var-lib\") pod \"ovn-controller-ovs-tc4gv\" (UID: \"77df6a52-36fd-44ea-b30e-33041ed49ed6\") " pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.086359 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mhfv\" (UniqueName: \"kubernetes.io/projected/77df6a52-36fd-44ea-b30e-33041ed49ed6-kube-api-access-2mhfv\") pod \"ovn-controller-ovs-tc4gv\" (UID: \"77df6a52-36fd-44ea-b30e-33041ed49ed6\") " pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.086387 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/334dab9b-9793-4424-9c39-27eac5f07626-scripts\") pod \"ovn-controller-h6kmt\" (UID: \"334dab9b-9793-4424-9c39-27eac5f07626\") " pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.086418 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/77df6a52-36fd-44ea-b30e-33041ed49ed6-etc-ovs\") pod \"ovn-controller-ovs-tc4gv\" (UID: \"77df6a52-36fd-44ea-b30e-33041ed49ed6\") " pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.086435 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77df6a52-36fd-44ea-b30e-33041ed49ed6-scripts\") pod \"ovn-controller-ovs-tc4gv\" (UID: \"77df6a52-36fd-44ea-b30e-33041ed49ed6\") " pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.087619 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/77df6a52-36fd-44ea-b30e-33041ed49ed6-var-run\") pod \"ovn-controller-ovs-tc4gv\" (UID: \"77df6a52-36fd-44ea-b30e-33041ed49ed6\") " pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.087619 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/334dab9b-9793-4424-9c39-27eac5f07626-var-run\") pod \"ovn-controller-h6kmt\" (UID: \"334dab9b-9793-4424-9c39-27eac5f07626\") " pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.087703 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/334dab9b-9793-4424-9c39-27eac5f07626-var-run-ovn\") pod \"ovn-controller-h6kmt\" (UID: \"334dab9b-9793-4424-9c39-27eac5f07626\") " pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.087706 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/334dab9b-9793-4424-9c39-27eac5f07626-var-log-ovn\") pod \"ovn-controller-h6kmt\" (UID: \"334dab9b-9793-4424-9c39-27eac5f07626\") " pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.087766 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/77df6a52-36fd-44ea-b30e-33041ed49ed6-var-log\") pod \"ovn-controller-ovs-tc4gv\" (UID: \"77df6a52-36fd-44ea-b30e-33041ed49ed6\") " pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.087912 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/77df6a52-36fd-44ea-b30e-33041ed49ed6-var-lib\") pod \"ovn-controller-ovs-tc4gv\" (UID: \"77df6a52-36fd-44ea-b30e-33041ed49ed6\") " pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.087916 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/77df6a52-36fd-44ea-b30e-33041ed49ed6-etc-ovs\") pod \"ovn-controller-ovs-tc4gv\" (UID: \"77df6a52-36fd-44ea-b30e-33041ed49ed6\") " pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.089821 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/77df6a52-36fd-44ea-b30e-33041ed49ed6-scripts\") pod \"ovn-controller-ovs-tc4gv\" (UID: \"77df6a52-36fd-44ea-b30e-33041ed49ed6\") " pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.090111 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/334dab9b-9793-4424-9c39-27eac5f07626-scripts\") pod \"ovn-controller-h6kmt\" (UID: \"334dab9b-9793-4424-9c39-27eac5f07626\") " pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.093385 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/334dab9b-9793-4424-9c39-27eac5f07626-combined-ca-bundle\") pod \"ovn-controller-h6kmt\" (UID: \"334dab9b-9793-4424-9c39-27eac5f07626\") " pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.098482 
4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/334dab9b-9793-4424-9c39-27eac5f07626-ovn-controller-tls-certs\") pod \"ovn-controller-h6kmt\" (UID: \"334dab9b-9793-4424-9c39-27eac5f07626\") " pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.103094 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrnbl\" (UniqueName: \"kubernetes.io/projected/334dab9b-9793-4424-9c39-27eac5f07626-kube-api-access-wrnbl\") pod \"ovn-controller-h6kmt\" (UID: \"334dab9b-9793-4424-9c39-27eac5f07626\") " pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.106239 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mhfv\" (UniqueName: \"kubernetes.io/projected/77df6a52-36fd-44ea-b30e-33041ed49ed6-kube-api-access-2mhfv\") pod \"ovn-controller-ovs-tc4gv\" (UID: \"77df6a52-36fd-44ea-b30e-33041ed49ed6\") " pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.156997 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.173471 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-h6kmt" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.737882 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.740058 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.742508 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.742725 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8vjj5" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.742865 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.744064 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.744273 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.753404 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.898629 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f1d4cf-2e13-41b0-b29a-f889e2acf0d0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.898714 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10f1d4cf-2e13-41b0-b29a-f889e2acf0d0-config\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:45 crc 
kubenswrapper[4858]: I0202 17:29:45.898760 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/10f1d4cf-2e13-41b0-b29a-f889e2acf0d0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.898832 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.899055 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/10f1d4cf-2e13-41b0-b29a-f889e2acf0d0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.899153 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/10f1d4cf-2e13-41b0-b29a-f889e2acf0d0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.900205 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkmqj\" (UniqueName: \"kubernetes.io/projected/10f1d4cf-2e13-41b0-b29a-f889e2acf0d0-kube-api-access-jkmqj\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:45 crc kubenswrapper[4858]: I0202 17:29:45.900300 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10f1d4cf-2e13-41b0-b29a-f889e2acf0d0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:46 crc kubenswrapper[4858]: I0202 17:29:46.001330 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkmqj\" (UniqueName: \"kubernetes.io/projected/10f1d4cf-2e13-41b0-b29a-f889e2acf0d0-kube-api-access-jkmqj\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:46 crc kubenswrapper[4858]: I0202 17:29:46.001376 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10f1d4cf-2e13-41b0-b29a-f889e2acf0d0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:46 crc kubenswrapper[4858]: I0202 17:29:46.001405 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f1d4cf-2e13-41b0-b29a-f889e2acf0d0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:46 crc kubenswrapper[4858]: I0202 17:29:46.001431 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/10f1d4cf-2e13-41b0-b29a-f889e2acf0d0-config\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:46 crc kubenswrapper[4858]: I0202 17:29:46.001458 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/10f1d4cf-2e13-41b0-b29a-f889e2acf0d0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:46 crc kubenswrapper[4858]: I0202 17:29:46.001501 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:46 crc kubenswrapper[4858]: I0202 17:29:46.001539 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/10f1d4cf-2e13-41b0-b29a-f889e2acf0d0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:46 crc kubenswrapper[4858]: I0202 17:29:46.001576 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/10f1d4cf-2e13-41b0-b29a-f889e2acf0d0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:46 crc kubenswrapper[4858]: I0202 17:29:46.002037 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/10f1d4cf-2e13-41b0-b29a-f889e2acf0d0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:46 crc kubenswrapper[4858]: I0202 17:29:46.002070 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:46 crc kubenswrapper[4858]: I0202 17:29:46.004121 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10f1d4cf-2e13-41b0-b29a-f889e2acf0d0-config\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:46 crc kubenswrapper[4858]: I0202 17:29:46.004670 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10f1d4cf-2e13-41b0-b29a-f889e2acf0d0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:46 crc kubenswrapper[4858]: I0202 17:29:46.005637 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f1d4cf-2e13-41b0-b29a-f889e2acf0d0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:46 crc kubenswrapper[4858]: I0202 17:29:46.008497 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/10f1d4cf-2e13-41b0-b29a-f889e2acf0d0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:46 crc kubenswrapper[4858]: I0202 17:29:46.013065 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/10f1d4cf-2e13-41b0-b29a-f889e2acf0d0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:46 crc kubenswrapper[4858]: I0202 17:29:46.017593 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkmqj\" (UniqueName: \"kubernetes.io/projected/10f1d4cf-2e13-41b0-b29a-f889e2acf0d0-kube-api-access-jkmqj\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:46 crc kubenswrapper[4858]: I0202 17:29:46.029347 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0\") " pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:46 crc kubenswrapper[4858]: I0202 17:29:46.060450 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.008722 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.010430 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.012966 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.013230 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.014697 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.018457 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.021320 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-vgzs7" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.152675 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a62694a3-fa2d-4765-ac02-3d19c4779d21-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.152741 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.152785 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a62694a3-fa2d-4765-ac02-3d19c4779d21-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.152808 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a62694a3-fa2d-4765-ac02-3d19c4779d21-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.152843 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a62694a3-fa2d-4765-ac02-3d19c4779d21-config\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.152863 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m865n\" (UniqueName: \"kubernetes.io/projected/a62694a3-fa2d-4765-ac02-3d19c4779d21-kube-api-access-m865n\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.152890 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a62694a3-fa2d-4765-ac02-3d19c4779d21-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.152923 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a62694a3-fa2d-4765-ac02-3d19c4779d21-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.254377 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.254677 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a62694a3-fa2d-4765-ac02-3d19c4779d21-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.254625 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.254793 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a62694a3-fa2d-4765-ac02-3d19c4779d21-scripts\") pod 
\"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.254827 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a62694a3-fa2d-4765-ac02-3d19c4779d21-config\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.254845 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m865n\" (UniqueName: \"kubernetes.io/projected/a62694a3-fa2d-4765-ac02-3d19c4779d21-kube-api-access-m865n\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.254865 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a62694a3-fa2d-4765-ac02-3d19c4779d21-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.254893 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a62694a3-fa2d-4765-ac02-3d19c4779d21-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.254946 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a62694a3-fa2d-4765-ac02-3d19c4779d21-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.255212 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a62694a3-fa2d-4765-ac02-3d19c4779d21-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.256013 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a62694a3-fa2d-4765-ac02-3d19c4779d21-config\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.256031 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a62694a3-fa2d-4765-ac02-3d19c4779d21-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.260096 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a62694a3-fa2d-4765-ac02-3d19c4779d21-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.260709 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a62694a3-fa2d-4765-ac02-3d19c4779d21-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.262139 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a62694a3-fa2d-4765-ac02-3d19c4779d21-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.277132 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m865n\" (UniqueName: \"kubernetes.io/projected/a62694a3-fa2d-4765-ac02-3d19c4779d21-kube-api-access-m865n\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.278362 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a62694a3-fa2d-4765-ac02-3d19c4779d21\") " pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:49 crc kubenswrapper[4858]: I0202 17:29:49.341541 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 17:29:50 crc kubenswrapper[4858]: I0202 17:29:50.435416 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 17:29:50 crc kubenswrapper[4858]: E0202 17:29:50.807450 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 17:29:50 crc kubenswrapper[4858]: E0202 17:29:50.807627 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5rxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-w6mdx_openstack(ff5d9e29-80ad-4627-ba54-0fd7bf4e84de): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 17:29:50 crc kubenswrapper[4858]: E0202 17:29:50.810102 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-w6mdx" podUID="ff5d9e29-80ad-4627-ba54-0fd7bf4e84de" Feb 02 17:29:50 crc kubenswrapper[4858]: E0202 17:29:50.868072 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 02 17:29:50 crc kubenswrapper[4858]: E0202 17:29:50.868541 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxk6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-q4f9j_openstack(db5cf574-1e7f-4bbd-8d06-b890e82bae03): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 17:29:50 crc kubenswrapper[4858]: E0202 17:29:50.870777 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-q4f9j" podUID="db5cf574-1e7f-4bbd-8d06-b890e82bae03" Feb 02 17:29:51 crc kubenswrapper[4858]: I0202 17:29:51.286786 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 17:29:51 crc kubenswrapper[4858]: I0202 17:29:51.291769 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 17:29:51 crc kubenswrapper[4858]: W0202 17:29:51.300683 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc386da2d_4b55_47da_aa8c_82b879ae7d3d.slice/crio-05e91ce4b12ebbc352f91268292ad7aad1b1ee203b9535db6e076e5eb90167c1 WatchSource:0}: Error finding container 05e91ce4b12ebbc352f91268292ad7aad1b1ee203b9535db6e076e5eb90167c1: Status 404 returned error can't find the container with id 05e91ce4b12ebbc352f91268292ad7aad1b1ee203b9535db6e076e5eb90167c1 Feb 02 17:29:51 crc kubenswrapper[4858]: I0202 17:29:51.445400 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 17:29:51 crc kubenswrapper[4858]: I0202 17:29:51.447960 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c386da2d-4b55-47da-aa8c-82b879ae7d3d","Type":"ContainerStarted","Data":"05e91ce4b12ebbc352f91268292ad7aad1b1ee203b9535db6e076e5eb90167c1"} Feb 02 17:29:51 crc kubenswrapper[4858]: I0202 17:29:51.451289 4858 generic.go:334] "Generic (PLEG): container finished" podID="8094eb3b-7f98-407b-8e5d-551ef561716b" 
containerID="00e7d4d295a15320f7119126c7d643a79c58e0e6f2ea7a939217061106c47abb" exitCode=0 Feb 02 17:29:51 crc kubenswrapper[4858]: I0202 17:29:51.451396 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" event={"ID":"8094eb3b-7f98-407b-8e5d-551ef561716b","Type":"ContainerDied","Data":"00e7d4d295a15320f7119126c7d643a79c58e0e6f2ea7a939217061106c47abb"} Feb 02 17:29:51 crc kubenswrapper[4858]: W0202 17:29:51.452544 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a3a3fdc_3021_44f0_8520_da5a88cf03e1.slice/crio-8eaea1c00d4159755e9991f030c8c665fe7b8a62a052421011bbaf5e274c0316 WatchSource:0}: Error finding container 8eaea1c00d4159755e9991f030c8c665fe7b8a62a052421011bbaf5e274c0316: Status 404 returned error can't find the container with id 8eaea1c00d4159755e9991f030c8c665fe7b8a62a052421011bbaf5e274c0316 Feb 02 17:29:51 crc kubenswrapper[4858]: I0202 17:29:51.454534 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"55d221f1-91f9-4045-b94b-95facb25b3dc","Type":"ContainerStarted","Data":"f9d449e4bd13494166da1ff6f05c9980f3128b5c4c438d50e12b7d93d55bade3"} Feb 02 17:29:51 crc kubenswrapper[4858]: I0202 17:29:51.456308 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e","Type":"ContainerStarted","Data":"9474f2ccb29f07399c2e7b8bfe8174b364bd7677af0b818865cdde02a0a376a5"} Feb 02 17:29:51 crc kubenswrapper[4858]: I0202 17:29:51.459682 4858 generic.go:334] "Generic (PLEG): container finished" podID="5a4f5119-0d30-4fa2-87c3-55aa74010bec" containerID="2fa291207764f806d8bcc45453915e4a1286eebfc3704cbe6dbd0f877125ed69" exitCode=0 Feb 02 17:29:51 crc kubenswrapper[4858]: I0202 17:29:51.459869 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-s5lks" event={"ID":"5a4f5119-0d30-4fa2-87c3-55aa74010bec","Type":"ContainerDied","Data":"2fa291207764f806d8bcc45453915e4a1286eebfc3704cbe6dbd0f877125ed69"} Feb 02 17:29:51 crc kubenswrapper[4858]: I0202 17:29:51.534091 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 17:29:51 crc kubenswrapper[4858]: W0202 17:29:51.535881 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59a6029b_5965_40a3_9dbd_0b4784340ce0.slice/crio-68e339094d1b5d08f6ffd7e5729af4aa1b570ddd1e70d336db92bb286a602e53 WatchSource:0}: Error finding container 68e339094d1b5d08f6ffd7e5729af4aa1b570ddd1e70d336db92bb286a602e53: Status 404 returned error can't find the container with id 68e339094d1b5d08f6ffd7e5729af4aa1b570ddd1e70d336db92bb286a602e53 Feb 02 17:29:51 crc kubenswrapper[4858]: I0202 17:29:51.554220 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-h6kmt"] Feb 02 17:29:51 crc kubenswrapper[4858]: I0202 17:29:51.561059 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 17:29:51 crc kubenswrapper[4858]: W0202 17:29:51.602727 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a24f351_b5a8_444d_b67d_7b9635f5a8aa.slice/crio-e6a7bb273aa86f1466dc5287f259aa64ecb25b173070d2fe4ea06600f2089796 WatchSource:0}: Error finding container e6a7bb273aa86f1466dc5287f259aa64ecb25b173070d2fe4ea06600f2089796: 
Status 404 returned error can't find the container with id e6a7bb273aa86f1466dc5287f259aa64ecb25b173070d2fe4ea06600f2089796 Feb 02 17:29:51 crc kubenswrapper[4858]: I0202 17:29:51.639869 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 17:29:51 crc kubenswrapper[4858]: W0202 17:29:51.663718 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10f1d4cf_2e13_41b0_b29a_f889e2acf0d0.slice/crio-c80605d610b1b924dcb0b97b18fbed475bc93cb31fe4a4e6c99b778044dc74c4 WatchSource:0}: Error finding container c80605d610b1b924dcb0b97b18fbed475bc93cb31fe4a4e6c99b778044dc74c4: Status 404 returned error can't find the container with id c80605d610b1b924dcb0b97b18fbed475bc93cb31fe4a4e6c99b778044dc74c4 Feb 02 17:29:51 crc kubenswrapper[4858]: E0202 17:29:51.760941 4858 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 02 17:29:51 crc kubenswrapper[4858]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/5a4f5119-0d30-4fa2-87c3-55aa74010bec/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 02 17:29:51 crc kubenswrapper[4858]: > podSandboxID="98d16a965008f799a96fc98af47f6f82503cd38e2ba57d316a02f59681851cd5" Feb 02 17:29:51 crc kubenswrapper[4858]: E0202 17:29:51.761474 4858 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 17:29:51 crc kubenswrapper[4858]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qcvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-s5lks_openstack(5a4f5119-0d30-4fa2-87c3-55aa74010bec): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/5a4f5119-0d30-4fa2-87c3-55aa74010bec/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 02 17:29:51 crc kubenswrapper[4858]: > logger="UnhandledError" Feb 02 17:29:51 crc kubenswrapper[4858]: E0202 17:29:51.764017 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/5a4f5119-0d30-4fa2-87c3-55aa74010bec/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-s5lks" podUID="5a4f5119-0d30-4fa2-87c3-55aa74010bec" Feb 02 17:29:51 crc kubenswrapper[4858]: I0202 17:29:51.896864 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-w6mdx" Feb 02 17:29:51 crc kubenswrapper[4858]: I0202 17:29:51.987363 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q4f9j" Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.020442 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff5d9e29-80ad-4627-ba54-0fd7bf4e84de-dns-svc\") pod \"ff5d9e29-80ad-4627-ba54-0fd7bf4e84de\" (UID: \"ff5d9e29-80ad-4627-ba54-0fd7bf4e84de\") " Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.020569 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5rxt\" (UniqueName: \"kubernetes.io/projected/ff5d9e29-80ad-4627-ba54-0fd7bf4e84de-kube-api-access-c5rxt\") pod \"ff5d9e29-80ad-4627-ba54-0fd7bf4e84de\" (UID: \"ff5d9e29-80ad-4627-ba54-0fd7bf4e84de\") " Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.020600 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5d9e29-80ad-4627-ba54-0fd7bf4e84de-config\") pod \"ff5d9e29-80ad-4627-ba54-0fd7bf4e84de\" (UID: \"ff5d9e29-80ad-4627-ba54-0fd7bf4e84de\") " Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.021542 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff5d9e29-80ad-4627-ba54-0fd7bf4e84de-config" (OuterVolumeSpecName: "config") pod "ff5d9e29-80ad-4627-ba54-0fd7bf4e84de" (UID: "ff5d9e29-80ad-4627-ba54-0fd7bf4e84de"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.021757 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff5d9e29-80ad-4627-ba54-0fd7bf4e84de-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ff5d9e29-80ad-4627-ba54-0fd7bf4e84de" (UID: "ff5d9e29-80ad-4627-ba54-0fd7bf4e84de"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.026681 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff5d9e29-80ad-4627-ba54-0fd7bf4e84de-kube-api-access-c5rxt" (OuterVolumeSpecName: "kube-api-access-c5rxt") pod "ff5d9e29-80ad-4627-ba54-0fd7bf4e84de" (UID: "ff5d9e29-80ad-4627-ba54-0fd7bf4e84de"). InnerVolumeSpecName "kube-api-access-c5rxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.122877 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxk6n\" (UniqueName: \"kubernetes.io/projected/db5cf574-1e7f-4bbd-8d06-b890e82bae03-kube-api-access-wxk6n\") pod \"db5cf574-1e7f-4bbd-8d06-b890e82bae03\" (UID: \"db5cf574-1e7f-4bbd-8d06-b890e82bae03\") " Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.123032 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db5cf574-1e7f-4bbd-8d06-b890e82bae03-config\") pod \"db5cf574-1e7f-4bbd-8d06-b890e82bae03\" (UID: \"db5cf574-1e7f-4bbd-8d06-b890e82bae03\") " Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.123549 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff5d9e29-80ad-4627-ba54-0fd7bf4e84de-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.123565 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5rxt\" (UniqueName: \"kubernetes.io/projected/ff5d9e29-80ad-4627-ba54-0fd7bf4e84de-kube-api-access-c5rxt\") on node \"crc\" DevicePath \"\"" Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.123580 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5d9e29-80ad-4627-ba54-0fd7bf4e84de-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.123717 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db5cf574-1e7f-4bbd-8d06-b890e82bae03-config" (OuterVolumeSpecName: "config") pod "db5cf574-1e7f-4bbd-8d06-b890e82bae03" (UID: "db5cf574-1e7f-4bbd-8d06-b890e82bae03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.126020 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db5cf574-1e7f-4bbd-8d06-b890e82bae03-kube-api-access-wxk6n" (OuterVolumeSpecName: "kube-api-access-wxk6n") pod "db5cf574-1e7f-4bbd-8d06-b890e82bae03" (UID: "db5cf574-1e7f-4bbd-8d06-b890e82bae03"). InnerVolumeSpecName "kube-api-access-wxk6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.235643 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxk6n\" (UniqueName: \"kubernetes.io/projected/db5cf574-1e7f-4bbd-8d06-b890e82bae03-kube-api-access-wxk6n\") on node \"crc\" DevicePath \"\"" Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.235772 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db5cf574-1e7f-4bbd-8d06-b890e82bae03-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.315180 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 17:29:52 crc kubenswrapper[4858]: W0202 17:29:52.332332 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda62694a3_fa2d_4765_ac02_3d19c4779d21.slice/crio-d9babf8b36906fab7e35498bb0f9a702f1a7b19330d10d3d2d48a1c451bfedd3 WatchSource:0}: Error finding container d9babf8b36906fab7e35498bb0f9a702f1a7b19330d10d3d2d48a1c451bfedd3: Status 404 returned error can't find the container with id d9babf8b36906fab7e35498bb0f9a702f1a7b19330d10d3d2d48a1c451bfedd3 Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.477612 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8a3a3fdc-3021-44f0-8520-da5a88cf03e1","Type":"ContainerStarted","Data":"8eaea1c00d4159755e9991f030c8c665fe7b8a62a052421011bbaf5e274c0316"} Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.479302 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a62694a3-fa2d-4765-ac02-3d19c4779d21","Type":"ContainerStarted","Data":"d9babf8b36906fab7e35498bb0f9a702f1a7b19330d10d3d2d48a1c451bfedd3"} Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.481561 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" event={"ID":"8094eb3b-7f98-407b-8e5d-551ef561716b","Type":"ContainerStarted","Data":"baf2e236d5af896009e59c4cc2c80a1b43db2102b7512cffa0b4173555175a20"} Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.481664 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.484263 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3a24f351-b5a8-444d-b67d-7b9635f5a8aa","Type":"ContainerStarted","Data":"e6a7bb273aa86f1466dc5287f259aa64ecb25b173070d2fe4ea06600f2089796"} Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.486280 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-w6mdx" event={"ID":"ff5d9e29-80ad-4627-ba54-0fd7bf4e84de","Type":"ContainerDied","Data":"16f4c2675096c67a7f101bce4adedee9d21a108ec46b9e9fa2bcb2271d09b9a0"} Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.486305 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-w6mdx" Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.487537 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"59a6029b-5965-40a3-9dbd-0b4784340ce0","Type":"ContainerStarted","Data":"68e339094d1b5d08f6ffd7e5729af4aa1b570ddd1e70d336db92bb286a602e53"} Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.488563 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0","Type":"ContainerStarted","Data":"c80605d610b1b924dcb0b97b18fbed475bc93cb31fe4a4e6c99b778044dc74c4"} Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.490339 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-h6kmt" event={"ID":"334dab9b-9793-4424-9c39-27eac5f07626","Type":"ContainerStarted","Data":"7262f5e1d72e3dec523f3133c58ba5925ab6501680dae827ce2aad8b3b699c8c"} Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.491802 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-q4f9j" event={"ID":"db5cf574-1e7f-4bbd-8d06-b890e82bae03","Type":"ContainerDied","Data":"fb7dc5618330448d76ae7645168966a0cffd822e0cc1189288cfc9bf1d591b4e"} Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.491821 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-q4f9j" Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.511497 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" podStartSLOduration=3.116179407 podStartE2EDuration="17.511477249s" podCreationTimestamp="2026-02-02 17:29:35 +0000 UTC" firstStartedPulling="2026-02-02 17:29:36.516077796 +0000 UTC m=+877.668493061" lastFinishedPulling="2026-02-02 17:29:50.911375638 +0000 UTC m=+892.063790903" observedRunningTime="2026-02-02 17:29:52.502737903 +0000 UTC m=+893.655153178" watchObservedRunningTime="2026-02-02 17:29:52.511477249 +0000 UTC m=+893.663892514" Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.535358 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q4f9j"] Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.544569 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-q4f9j"] Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.572373 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-w6mdx"] Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.573927 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-w6mdx"] Feb 02 17:29:52 crc kubenswrapper[4858]: I0202 17:29:52.724497 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tc4gv"] Feb 02 17:29:53 crc kubenswrapper[4858]: I0202 17:29:53.501027 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-s5lks" event={"ID":"5a4f5119-0d30-4fa2-87c3-55aa74010bec","Type":"ContainerStarted","Data":"03792f22e2924c5fd2570f11952874aa6e91ea74c86c14837ffd21c900e3e1d9"} Feb 02 17:29:53 crc kubenswrapper[4858]: I0202 17:29:53.501260 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-s5lks" Feb 02 17:29:53 crc kubenswrapper[4858]: I0202 17:29:53.504876 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-tc4gv" event={"ID":"77df6a52-36fd-44ea-b30e-33041ed49ed6","Type":"ContainerStarted","Data":"c3f4d883db7a53efed77c9c3a61b56d9caaca81a2fe80556d57a914710e4b191"} Feb 02 17:29:53 crc kubenswrapper[4858]: I0202 17:29:53.520672 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-s5lks" podStartSLOduration=3.785322962 podStartE2EDuration="18.520656961s" podCreationTimestamp="2026-02-02 17:29:35 +0000 UTC" firstStartedPulling="2026-02-02 17:29:36.197396341 +0000 UTC m=+877.349811616" lastFinishedPulling="2026-02-02 17:29:50.93273035 +0000 UTC m=+892.085145615" observedRunningTime="2026-02-02 17:29:53.515815214 +0000 UTC m=+894.668230489" watchObservedRunningTime="2026-02-02 17:29:53.520656961 +0000 UTC m=+894.673072226" Feb 02 17:29:54 crc kubenswrapper[4858]: I0202 17:29:54.415150 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db5cf574-1e7f-4bbd-8d06-b890e82bae03" path="/var/lib/kubelet/pods/db5cf574-1e7f-4bbd-8d06-b890e82bae03/volumes" Feb 02 17:29:54 crc kubenswrapper[4858]: I0202 17:29:54.415782 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff5d9e29-80ad-4627-ba54-0fd7bf4e84de" path="/var/lib/kubelet/pods/ff5d9e29-80ad-4627-ba54-0fd7bf4e84de/volumes" Feb 02 17:29:56 crc kubenswrapper[4858]: I0202 17:29:56.034296 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" Feb 02 17:29:56 crc kubenswrapper[4858]: I0202 17:29:56.094198 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s5lks"] Feb 02 17:29:56 crc kubenswrapper[4858]: I0202 17:29:56.095410 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-s5lks" podUID="5a4f5119-0d30-4fa2-87c3-55aa74010bec" containerName="dnsmasq-dns" containerID="cri-o://03792f22e2924c5fd2570f11952874aa6e91ea74c86c14837ffd21c900e3e1d9" gracePeriod=10 Feb 02 17:29:56 crc kubenswrapper[4858]: I0202 17:29:56.531513 4858 generic.go:334] "Generic (PLEG): container finished" podID="5a4f5119-0d30-4fa2-87c3-55aa74010bec" containerID="03792f22e2924c5fd2570f11952874aa6e91ea74c86c14837ffd21c900e3e1d9" exitCode=0 Feb 02 17:29:56 crc kubenswrapper[4858]: I0202 17:29:56.531659 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-s5lks" event={"ID":"5a4f5119-0d30-4fa2-87c3-55aa74010bec","Type":"ContainerDied","Data":"03792f22e2924c5fd2570f11952874aa6e91ea74c86c14837ffd21c900e3e1d9"} Feb 02 17:29:58 crc kubenswrapper[4858]: I0202 17:29:58.337641 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-s5lks" Feb 02 17:29:58 crc kubenswrapper[4858]: I0202 17:29:58.442077 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a4f5119-0d30-4fa2-87c3-55aa74010bec-dns-svc\") pod \"5a4f5119-0d30-4fa2-87c3-55aa74010bec\" (UID: \"5a4f5119-0d30-4fa2-87c3-55aa74010bec\") " Feb 02 17:29:58 crc kubenswrapper[4858]: I0202 17:29:58.442219 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a4f5119-0d30-4fa2-87c3-55aa74010bec-config\") pod \"5a4f5119-0d30-4fa2-87c3-55aa74010bec\" (UID: \"5a4f5119-0d30-4fa2-87c3-55aa74010bec\") " Feb 02 17:29:58 crc kubenswrapper[4858]: I0202 17:29:58.442258 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qcvt\" (UniqueName: \"kubernetes.io/projected/5a4f5119-0d30-4fa2-87c3-55aa74010bec-kube-api-access-7qcvt\") pod \"5a4f5119-0d30-4fa2-87c3-55aa74010bec\" (UID: \"5a4f5119-0d30-4fa2-87c3-55aa74010bec\") " Feb 02 17:29:58 crc kubenswrapper[4858]: I0202 17:29:58.446946 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a4f5119-0d30-4fa2-87c3-55aa74010bec-kube-api-access-7qcvt" (OuterVolumeSpecName: "kube-api-access-7qcvt") pod "5a4f5119-0d30-4fa2-87c3-55aa74010bec" (UID: "5a4f5119-0d30-4fa2-87c3-55aa74010bec"). InnerVolumeSpecName "kube-api-access-7qcvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:29:58 crc kubenswrapper[4858]: I0202 17:29:58.480009 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a4f5119-0d30-4fa2-87c3-55aa74010bec-config" (OuterVolumeSpecName: "config") pod "5a4f5119-0d30-4fa2-87c3-55aa74010bec" (UID: "5a4f5119-0d30-4fa2-87c3-55aa74010bec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:29:58 crc kubenswrapper[4858]: I0202 17:29:58.488010 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a4f5119-0d30-4fa2-87c3-55aa74010bec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5a4f5119-0d30-4fa2-87c3-55aa74010bec" (UID: "5a4f5119-0d30-4fa2-87c3-55aa74010bec"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:29:58 crc kubenswrapper[4858]: I0202 17:29:58.544546 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a4f5119-0d30-4fa2-87c3-55aa74010bec-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:29:58 crc kubenswrapper[4858]: I0202 17:29:58.544581 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qcvt\" (UniqueName: \"kubernetes.io/projected/5a4f5119-0d30-4fa2-87c3-55aa74010bec-kube-api-access-7qcvt\") on node \"crc\" DevicePath \"\"" Feb 02 17:29:58 crc kubenswrapper[4858]: I0202 17:29:58.544594 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a4f5119-0d30-4fa2-87c3-55aa74010bec-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 17:29:58 crc kubenswrapper[4858]: I0202 17:29:58.550312 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-s5lks" event={"ID":"5a4f5119-0d30-4fa2-87c3-55aa74010bec","Type":"ContainerDied","Data":"98d16a965008f799a96fc98af47f6f82503cd38e2ba57d316a02f59681851cd5"} Feb 02 17:29:58 crc kubenswrapper[4858]: I0202 17:29:58.550377 4858 scope.go:117] "RemoveContainer" containerID="03792f22e2924c5fd2570f11952874aa6e91ea74c86c14837ffd21c900e3e1d9" Feb 02 17:29:58 crc kubenswrapper[4858]: I0202 17:29:58.550539 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-s5lks" Feb 02 17:29:58 crc kubenswrapper[4858]: I0202 17:29:58.593106 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s5lks"] Feb 02 17:29:58 crc kubenswrapper[4858]: I0202 17:29:58.597807 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-s5lks"] Feb 02 17:29:58 crc kubenswrapper[4858]: I0202 17:29:58.810044 4858 scope.go:117] "RemoveContainer" containerID="2fa291207764f806d8bcc45453915e4a1286eebfc3704cbe6dbd0f877125ed69" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.167751 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500890-w9tc9"] Feb 02 17:30:00 crc kubenswrapper[4858]: E0202 17:30:00.168650 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a4f5119-0d30-4fa2-87c3-55aa74010bec" containerName="dnsmasq-dns" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.168671 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a4f5119-0d30-4fa2-87c3-55aa74010bec" containerName="dnsmasq-dns" Feb 02 17:30:00 crc kubenswrapper[4858]: E0202 17:30:00.168706 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a4f5119-0d30-4fa2-87c3-55aa74010bec" containerName="init" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.168717 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a4f5119-0d30-4fa2-87c3-55aa74010bec" containerName="init" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.169014 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a4f5119-0d30-4fa2-87c3-55aa74010bec" containerName="dnsmasq-dns" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.170150 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500890-w9tc9" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.172717 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.172876 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.183765 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500890-w9tc9"] Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.269581 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/529cc58f-54e5-420c-8278-4e015207275f-secret-volume\") pod \"collect-profiles-29500890-w9tc9\" (UID: \"529cc58f-54e5-420c-8278-4e015207275f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500890-w9tc9" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.269759 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/529cc58f-54e5-420c-8278-4e015207275f-config-volume\") pod \"collect-profiles-29500890-w9tc9\" (UID: \"529cc58f-54e5-420c-8278-4e015207275f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500890-w9tc9" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.269879 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7zx9\" (UniqueName: \"kubernetes.io/projected/529cc58f-54e5-420c-8278-4e015207275f-kube-api-access-n7zx9\") pod \"collect-profiles-29500890-w9tc9\" (UID: \"529cc58f-54e5-420c-8278-4e015207275f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500890-w9tc9" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.370843 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7zx9\" (UniqueName: \"kubernetes.io/projected/529cc58f-54e5-420c-8278-4e015207275f-kube-api-access-n7zx9\") pod \"collect-profiles-29500890-w9tc9\" (UID: \"529cc58f-54e5-420c-8278-4e015207275f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500890-w9tc9" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.370937 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/529cc58f-54e5-420c-8278-4e015207275f-secret-volume\") pod \"collect-profiles-29500890-w9tc9\" (UID: \"529cc58f-54e5-420c-8278-4e015207275f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500890-w9tc9" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.371022 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/529cc58f-54e5-420c-8278-4e015207275f-config-volume\") pod \"collect-profiles-29500890-w9tc9\" (UID: \"529cc58f-54e5-420c-8278-4e015207275f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500890-w9tc9" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.376034 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 
17:30:00.384366 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/529cc58f-54e5-420c-8278-4e015207275f-config-volume\") pod \"collect-profiles-29500890-w9tc9\" (UID: \"529cc58f-54e5-420c-8278-4e015207275f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500890-w9tc9" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.391308 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/529cc58f-54e5-420c-8278-4e015207275f-secret-volume\") pod \"collect-profiles-29500890-w9tc9\" (UID: \"529cc58f-54e5-420c-8278-4e015207275f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500890-w9tc9" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.391672 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7zx9\" (UniqueName: \"kubernetes.io/projected/529cc58f-54e5-420c-8278-4e015207275f-kube-api-access-n7zx9\") pod \"collect-profiles-29500890-w9tc9\" (UID: \"529cc58f-54e5-420c-8278-4e015207275f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500890-w9tc9" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.422005 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a4f5119-0d30-4fa2-87c3-55aa74010bec" path="/var/lib/kubelet/pods/5a4f5119-0d30-4fa2-87c3-55aa74010bec/volumes" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.512656 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.515849 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500890-w9tc9" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.610829 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"55d221f1-91f9-4045-b94b-95facb25b3dc","Type":"ContainerStarted","Data":"7aadaa269dc736d732bdf76758c3351ee91ef7f1b2b6fb59c37adeb68100c533"} Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.612725 4858 generic.go:334] "Generic (PLEG): container finished" podID="77df6a52-36fd-44ea-b30e-33041ed49ed6" containerID="848a021c5fba2014706928d5e28d600ef8f7c2d8ea81a2e12ce39abdad75cb22" exitCode=0 Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.612770 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tc4gv" event={"ID":"77df6a52-36fd-44ea-b30e-33041ed49ed6","Type":"ContainerDied","Data":"848a021c5fba2014706928d5e28d600ef8f7c2d8ea81a2e12ce39abdad75cb22"} Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.670880 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a62694a3-fa2d-4765-ac02-3d19c4779d21","Type":"ContainerStarted","Data":"3cbb8fa53ec265e368b0c21843b2668bda54fdf612d9bb10aab3852fca2880fc"} Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.685289 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0","Type":"ContainerStarted","Data":"68ae792d541ee17fc948bb64c68afd9604157edf69fdd91e7eda2c45b8368507"} Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.695071 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"59a6029b-5965-40a3-9dbd-0b4784340ce0","Type":"ContainerStarted","Data":"14e71f14a7bb05f3ce4f0fc3b81925ecdce5f94da39722236c53b6c8dadc5526"} Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.707021 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-h6kmt" event={"ID":"334dab9b-9793-4424-9c39-27eac5f07626","Type":"ContainerStarted","Data":"48c3d2053275908299d4d2ea4c448a4a8e6704c7c8f636762f5a2bd25cf39215"} Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.707422 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-h6kmt" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.709343 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c386da2d-4b55-47da-aa8c-82b879ae7d3d","Type":"ContainerStarted","Data":"03a7b1b5ee42926d820a0de677722c10297ee8314712951307aaec1017083298"} Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.709600 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.712734 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e","Type":"ContainerStarted","Data":"ac808642e58db1d8622822bc03c9ea33e8538cb4f029c9db6a85997ead44db3c"} Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.715158 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3a24f351-b5a8-444d-b67d-7b9635f5a8aa","Type":"ContainerStarted","Data":"a65920c03b3da27ae5b1c79a721059c26749a2a69aae5c6b662dac848117f967"} Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.723989 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8a3a3fdc-3021-44f0-8520-da5a88cf03e1","Type":"ContainerStarted","Data":"ddd9cf79f46f14404ae7a070e1ac5605911d89aa1921e83eb4dc910d77961bf5"} Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.769215 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-h6kmt" podStartSLOduration=9.586305342 podStartE2EDuration="16.769197188s" podCreationTimestamp="2026-02-02 17:29:44 +0000 UTC" firstStartedPulling="2026-02-02 17:29:51.61554511 +0000 UTC m=+892.767960375" lastFinishedPulling="2026-02-02 17:29:58.798436936 +0000 UTC m=+899.950852221" observedRunningTime="2026-02-02 17:30:00.762244132 +0000 UTC m=+901.914659397" watchObservedRunningTime="2026-02-02 17:30:00.769197188 +0000 UTC m=+901.921612443" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.873941 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.923143185 podStartE2EDuration="18.8739198s" podCreationTimestamp="2026-02-02 17:29:42 +0000 UTC" firstStartedPulling="2026-02-02 17:29:51.592159951 +0000 UTC m=+892.744575216" lastFinishedPulling="2026-02-02 17:29:59.542936566 +0000 UTC m=+900.695351831" observedRunningTime="2026-02-02 17:30:00.866607374 +0000 UTC m=+902.019022639" watchObservedRunningTime="2026-02-02 17:30:00.8739198 +0000 UTC m=+902.026335055" Feb 02 17:30:00 crc kubenswrapper[4858]: I0202 17:30:00.950778 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=14.635580611 podStartE2EDuration="21.950758186s" podCreationTimestamp="2026-02-02 17:29:39 +0000 UTC" 
firstStartedPulling="2026-02-02 17:29:51.302061332 +0000 UTC m=+892.454476597" lastFinishedPulling="2026-02-02 17:29:58.617238907 +0000 UTC m=+899.769654172" observedRunningTime="2026-02-02 17:30:00.939004685 +0000 UTC m=+902.091419970" watchObservedRunningTime="2026-02-02 17:30:00.950758186 +0000 UTC m=+902.103173451" Feb 02 17:30:01 crc kubenswrapper[4858]: I0202 17:30:01.149558 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500890-w9tc9"] Feb 02 17:30:01 crc kubenswrapper[4858]: W0202 17:30:01.466026 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod529cc58f_54e5_420c_8278_4e015207275f.slice/crio-38c18e11a8c597829eae3ea46b19c9cb53ee64dd7d36799e36088310fec235be WatchSource:0}: Error finding container 38c18e11a8c597829eae3ea46b19c9cb53ee64dd7d36799e36088310fec235be: Status 404 returned error can't find the container with id 38c18e11a8c597829eae3ea46b19c9cb53ee64dd7d36799e36088310fec235be Feb 02 17:30:01 crc kubenswrapper[4858]: I0202 17:30:01.736760 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tc4gv" event={"ID":"77df6a52-36fd-44ea-b30e-33041ed49ed6","Type":"ContainerStarted","Data":"bfbc474af9d4ed3d64349327212146dbc1eeb904eadfe36dec60422e6352bd1c"} Feb 02 17:30:01 crc kubenswrapper[4858]: I0202 17:30:01.736836 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tc4gv" event={"ID":"77df6a52-36fd-44ea-b30e-33041ed49ed6","Type":"ContainerStarted","Data":"3b9ad140a1b6bd05b9cd3ca1ef27fc7132a4cd5e2b2df8c537473a5941164cf9"} Feb 02 17:30:01 crc kubenswrapper[4858]: I0202 17:30:01.737742 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:30:01 crc kubenswrapper[4858]: I0202 17:30:01.737807 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:30:01 crc kubenswrapper[4858]: I0202 17:30:01.739629 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500890-w9tc9" event={"ID":"529cc58f-54e5-420c-8278-4e015207275f","Type":"ContainerStarted","Data":"38c18e11a8c597829eae3ea46b19c9cb53ee64dd7d36799e36088310fec235be"} Feb 02 17:30:01 crc kubenswrapper[4858]: I0202 17:30:01.739699 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 02 17:30:01 crc kubenswrapper[4858]: I0202 17:30:01.757414 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-tc4gv" podStartSLOduration=12.226782304 podStartE2EDuration="17.757389877s" podCreationTimestamp="2026-02-02 17:29:44 +0000 UTC" firstStartedPulling="2026-02-02 17:29:53.213167932 +0000 UTC m=+894.365583197" lastFinishedPulling="2026-02-02 17:29:58.743775475 +0000 UTC m=+899.896190770" observedRunningTime="2026-02-02 17:30:01.755049681 +0000 UTC m=+902.907464946" watchObservedRunningTime="2026-02-02 17:30:01.757389877 +0000 UTC m=+902.909805142" Feb 02 17:30:02 crc kubenswrapper[4858]: I0202 17:30:02.756432 4858 generic.go:334] "Generic (PLEG): container finished" podID="529cc58f-54e5-420c-8278-4e015207275f" containerID="9bbf99522e64265ffc60b34c05d71d87b7ada7cd9b7fc79f7ce7a59c6065478c" exitCode=0 Feb 02 17:30:02 crc kubenswrapper[4858]: I0202 17:30:02.757041 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500890-w9tc9" event={"ID":"529cc58f-54e5-420c-8278-4e015207275f","Type":"ContainerDied","Data":"9bbf99522e64265ffc60b34c05d71d87b7ada7cd9b7fc79f7ce7a59c6065478c"} Feb 02 17:30:02 crc kubenswrapper[4858]: I0202 17:30:02.761181 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a62694a3-fa2d-4765-ac02-3d19c4779d21","Type":"ContainerStarted","Data":"d2f81996514cba97a356c4cd5fdefa5ddd805c007798e84926e1299a19e9bf3d"} Feb 02 17:30:02 crc kubenswrapper[4858]: I0202 17:30:02.765784 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"10f1d4cf-2e13-41b0-b29a-f889e2acf0d0","Type":"ContainerStarted","Data":"678a75ba25572409639d63a8fda2cffea5844a8813fb83bcd99667f0c24509d9"} Feb 02 17:30:02 crc kubenswrapper[4858]: I0202 17:30:02.831467 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=6.492542666 podStartE2EDuration="15.831442877s" podCreationTimestamp="2026-02-02 17:29:47 +0000 UTC" firstStartedPulling="2026-02-02 17:29:52.33595143 +0000 UTC m=+893.488366695" lastFinishedPulling="2026-02-02 17:30:01.674851641 +0000 UTC m=+902.827266906" observedRunningTime="2026-02-02 17:30:02.824773519 +0000 UTC m=+903.977188794" watchObservedRunningTime="2026-02-02 17:30:02.831442877 +0000 UTC m=+903.983858162" Feb 02 17:30:02 crc kubenswrapper[4858]: I0202 17:30:02.857790 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=8.875767168 podStartE2EDuration="18.857761209s" podCreationTimestamp="2026-02-02 17:29:44 +0000 UTC" firstStartedPulling="2026-02-02 17:29:51.687117428 +0000 UTC m=+892.839532693" lastFinishedPulling="2026-02-02 17:30:01.669111469 +0000 UTC m=+902.821526734" observedRunningTime="2026-02-02 17:30:02.851718789 +0000 UTC m=+904.004134054" watchObservedRunningTime="2026-02-02 17:30:02.857761209 +0000 UTC m=+904.010176484" Feb 02 17:30:03 crc kubenswrapper[4858]: I0202 17:30:03.773823 4858 generic.go:334] "Generic (PLEG): container finished" podID="3a24f351-b5a8-444d-b67d-7b9635f5a8aa" containerID="a65920c03b3da27ae5b1c79a721059c26749a2a69aae5c6b662dac848117f967" exitCode=0 Feb 02 17:30:03 crc kubenswrapper[4858]: I0202 17:30:03.773907 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3a24f351-b5a8-444d-b67d-7b9635f5a8aa","Type":"ContainerDied","Data":"a65920c03b3da27ae5b1c79a721059c26749a2a69aae5c6b662dac848117f967"} Feb 02 17:30:03 crc kubenswrapper[4858]: I0202 17:30:03.775727 4858 generic.go:334] "Generic (PLEG): container finished" podID="8a3a3fdc-3021-44f0-8520-da5a88cf03e1" containerID="ddd9cf79f46f14404ae7a070e1ac5605911d89aa1921e83eb4dc910d77961bf5" exitCode=0 Feb 02 17:30:03 crc kubenswrapper[4858]: I0202 17:30:03.775782 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8a3a3fdc-3021-44f0-8520-da5a88cf03e1","Type":"ContainerDied","Data":"ddd9cf79f46f14404ae7a070e1ac5605911d89aa1921e83eb4dc910d77961bf5"} Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.045499 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500890-w9tc9" Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.060952 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.106360 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.150888 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7zx9\" (UniqueName: \"kubernetes.io/projected/529cc58f-54e5-420c-8278-4e015207275f-kube-api-access-n7zx9\") pod \"529cc58f-54e5-420c-8278-4e015207275f\" (UID: \"529cc58f-54e5-420c-8278-4e015207275f\") " Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.151384 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/529cc58f-54e5-420c-8278-4e015207275f-config-volume\") pod \"529cc58f-54e5-420c-8278-4e015207275f\" (UID: \"529cc58f-54e5-420c-8278-4e015207275f\") " Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.151480 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/529cc58f-54e5-420c-8278-4e015207275f-secret-volume\") pod \"529cc58f-54e5-420c-8278-4e015207275f\" (UID: \"529cc58f-54e5-420c-8278-4e015207275f\") " Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.153209 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/529cc58f-54e5-420c-8278-4e015207275f-config-volume" (OuterVolumeSpecName: "config-volume") pod "529cc58f-54e5-420c-8278-4e015207275f" (UID: "529cc58f-54e5-420c-8278-4e015207275f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.155192 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529cc58f-54e5-420c-8278-4e015207275f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "529cc58f-54e5-420c-8278-4e015207275f" (UID: "529cc58f-54e5-420c-8278-4e015207275f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.155714 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/529cc58f-54e5-420c-8278-4e015207275f-kube-api-access-n7zx9" (OuterVolumeSpecName: "kube-api-access-n7zx9") pod "529cc58f-54e5-420c-8278-4e015207275f" (UID: "529cc58f-54e5-420c-8278-4e015207275f"). InnerVolumeSpecName "kube-api-access-n7zx9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.252826 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/529cc58f-54e5-420c-8278-4e015207275f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.252865 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7zx9\" (UniqueName: \"kubernetes.io/projected/529cc58f-54e5-420c-8278-4e015207275f-kube-api-access-n7zx9\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.252880 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/529cc58f-54e5-420c-8278-4e015207275f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.341912 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.342004 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.376514 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.789835 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500890-w9tc9" event={"ID":"529cc58f-54e5-420c-8278-4e015207275f","Type":"ContainerDied","Data":"38c18e11a8c597829eae3ea46b19c9cb53ee64dd7d36799e36088310fec235be"} Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.789874 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500890-w9tc9" Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.789882 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38c18e11a8c597829eae3ea46b19c9cb53ee64dd7d36799e36088310fec235be" Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.792384 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3a24f351-b5a8-444d-b67d-7b9635f5a8aa","Type":"ContainerStarted","Data":"38f18a1e7af6202d0501fd88ea225e8dd4a5bee53aa9dae4e17adcadc82fa66e"} Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.797187 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8a3a3fdc-3021-44f0-8520-da5a88cf03e1","Type":"ContainerStarted","Data":"c9e98aa238fabfde33785d7df857fb10fd411a81c3aea5754cbfb77dc769b848"} Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.798183 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.831717 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=21.692463455 podStartE2EDuration="28.83167816s" podCreationTimestamp="2026-02-02 17:29:36 +0000 UTC" firstStartedPulling="2026-02-02 17:29:51.605546198 +0000 UTC m=+892.757961463" lastFinishedPulling="2026-02-02 17:29:58.744760863 +0000 UTC m=+899.897176168" observedRunningTime="2026-02-02 17:30:04.829149849 +0000 UTC m=+905.981565124" watchObservedRunningTime="2026-02-02 17:30:04.83167816 +0000 UTC m=+905.984093425" Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.846818 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.856100 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.858567 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=19.697821187 podStartE2EDuration="26.858546478s" podCreationTimestamp="2026-02-02 17:29:38 +0000 UTC" firstStartedPulling="2026-02-02 17:29:51.455835207 +0000 UTC m=+892.608250472" lastFinishedPulling="2026-02-02 17:29:58.616560498 +0000 UTC m=+899.768975763" observedRunningTime="2026-02-02 17:30:04.851315054 +0000 UTC m=+906.003730339" watchObservedRunningTime="2026-02-02 17:30:04.858546478 +0000 UTC m=+906.010961743" Feb 02 17:30:04 crc kubenswrapper[4858]: I0202 17:30:04.992117 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.173138 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-g5d9v"] Feb 02 17:30:05 crc kubenswrapper[4858]: E0202 17:30:05.173551 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529cc58f-54e5-420c-8278-4e015207275f" containerName="collect-profiles" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.173571 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="529cc58f-54e5-420c-8278-4e015207275f" containerName="collect-profiles" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.173749 4858 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="529cc58f-54e5-420c-8278-4e015207275f" containerName="collect-profiles" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.174399 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-g5d9v" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.183573 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.184051 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-g5d9v"] Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.214466 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cqzdh"] Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.215957 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-cqzdh" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.217630 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.247717 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cqzdh"] Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.273448 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de17af80-1849-4a19-ae89-50057bc76aa3-config\") pod \"ovn-controller-metrics-g5d9v\" (UID: \"de17af80-1849-4a19-ae89-50057bc76aa3\") " pod="openstack/ovn-controller-metrics-g5d9v" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.273500 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/318c8faf-9486-4f68-8fde-058544f1315b-config\") pod \"dnsmasq-dns-7fd796d7df-cqzdh\" (UID: \"318c8faf-9486-4f68-8fde-058544f1315b\") " pod="openstack/dnsmasq-dns-7fd796d7df-cqzdh" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.273526 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/de17af80-1849-4a19-ae89-50057bc76aa3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-g5d9v\" (UID: \"de17af80-1849-4a19-ae89-50057bc76aa3\") " pod="openstack/ovn-controller-metrics-g5d9v" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.273568 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/318c8faf-9486-4f68-8fde-058544f1315b-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-cqzdh\" (UID: \"318c8faf-9486-4f68-8fde-058544f1315b\") " pod="openstack/dnsmasq-dns-7fd796d7df-cqzdh" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.273591 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/de17af80-1849-4a19-ae89-50057bc76aa3-ovs-rundir\") pod \"ovn-controller-metrics-g5d9v\" (UID: \"de17af80-1849-4a19-ae89-50057bc76aa3\") " pod="openstack/ovn-controller-metrics-g5d9v" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.273625 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/de17af80-1849-4a19-ae89-50057bc76aa3-ovn-rundir\") pod \"ovn-controller-metrics-g5d9v\" (UID: \"de17af80-1849-4a19-ae89-50057bc76aa3\") " pod="openstack/ovn-controller-metrics-g5d9v" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.273647 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnmk4\" (UniqueName: \"kubernetes.io/projected/de17af80-1849-4a19-ae89-50057bc76aa3-kube-api-access-bnmk4\") pod \"ovn-controller-metrics-g5d9v\" (UID: \"de17af80-1849-4a19-ae89-50057bc76aa3\") " pod="openstack/ovn-controller-metrics-g5d9v" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.273671 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de17af80-1849-4a19-ae89-50057bc76aa3-combined-ca-bundle\") pod \"ovn-controller-metrics-g5d9v\" (UID: \"de17af80-1849-4a19-ae89-50057bc76aa3\") " pod="openstack/ovn-controller-metrics-g5d9v" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.273692 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmsdb\" (UniqueName: \"kubernetes.io/projected/318c8faf-9486-4f68-8fde-058544f1315b-kube-api-access-vmsdb\") pod \"dnsmasq-dns-7fd796d7df-cqzdh\" (UID: \"318c8faf-9486-4f68-8fde-058544f1315b\") " pod="openstack/dnsmasq-dns-7fd796d7df-cqzdh" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.273712 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/318c8faf-9486-4f68-8fde-058544f1315b-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-cqzdh\" (UID: \"318c8faf-9486-4f68-8fde-058544f1315b\") " pod="openstack/dnsmasq-dns-7fd796d7df-cqzdh" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.338900 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.340346 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.342591 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-w7wf8" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.343773 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.344245 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.344517 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.350702 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cqzdh"] Feb 02 17:30:05 crc kubenswrapper[4858]: E0202 17:30:05.351792 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-vmsdb ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7fd796d7df-cqzdh" podUID="318c8faf-9486-4f68-8fde-058544f1315b" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.357607 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.375272 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c6b95f0-73a1-4b25-9905-2fa224e52142-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3c6b95f0-73a1-4b25-9905-2fa224e52142\") " pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.375525 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv2f8\" (UniqueName: \"kubernetes.io/projected/3c6b95f0-73a1-4b25-9905-2fa224e52142-kube-api-access-vv2f8\") pod \"ovn-northd-0\" (UID: \"3c6b95f0-73a1-4b25-9905-2fa224e52142\") " pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.375624 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de17af80-1849-4a19-ae89-50057bc76aa3-config\") pod \"ovn-controller-metrics-g5d9v\" (UID: \"de17af80-1849-4a19-ae89-50057bc76aa3\") " pod="openstack/ovn-controller-metrics-g5d9v" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.375703 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/318c8faf-9486-4f68-8fde-058544f1315b-config\") pod \"dnsmasq-dns-7fd796d7df-cqzdh\" (UID: \"318c8faf-9486-4f68-8fde-058544f1315b\") " pod="openstack/dnsmasq-dns-7fd796d7df-cqzdh" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.375779 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/de17af80-1849-4a19-ae89-50057bc76aa3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-g5d9v\" (UID: \"de17af80-1849-4a19-ae89-50057bc76aa3\") " pod="openstack/ovn-controller-metrics-g5d9v" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.375846 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/3c6b95f0-73a1-4b25-9905-2fa224e52142-scripts\") pod \"ovn-northd-0\" (UID: \"3c6b95f0-73a1-4b25-9905-2fa224e52142\") " pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.375924 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6b95f0-73a1-4b25-9905-2fa224e52142-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3c6b95f0-73a1-4b25-9905-2fa224e52142\") " pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.376021 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/318c8faf-9486-4f68-8fde-058544f1315b-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-cqzdh\" (UID: \"318c8faf-9486-4f68-8fde-058544f1315b\") " pod="openstack/dnsmasq-dns-7fd796d7df-cqzdh" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.376105 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/de17af80-1849-4a19-ae89-50057bc76aa3-ovs-rundir\") pod \"ovn-controller-metrics-g5d9v\" (UID: \"de17af80-1849-4a19-ae89-50057bc76aa3\") " pod="openstack/ovn-controller-metrics-g5d9v" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.376169 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c6b95f0-73a1-4b25-9905-2fa224e52142-config\") pod \"ovn-northd-0\" (UID: \"3c6b95f0-73a1-4b25-9905-2fa224e52142\") " pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.376236 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c6b95f0-73a1-4b25-9905-2fa224e52142-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3c6b95f0-73a1-4b25-9905-2fa224e52142\") " pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.376322 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3c6b95f0-73a1-4b25-9905-2fa224e52142-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3c6b95f0-73a1-4b25-9905-2fa224e52142\") " pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.376394 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/de17af80-1849-4a19-ae89-50057bc76aa3-ovn-rundir\") pod \"ovn-controller-metrics-g5d9v\" (UID: \"de17af80-1849-4a19-ae89-50057bc76aa3\") " pod="openstack/ovn-controller-metrics-g5d9v" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.376469 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnmk4\" (UniqueName: \"kubernetes.io/projected/de17af80-1849-4a19-ae89-50057bc76aa3-kube-api-access-bnmk4\") pod \"ovn-controller-metrics-g5d9v\" (UID: \"de17af80-1849-4a19-ae89-50057bc76aa3\") " pod="openstack/ovn-controller-metrics-g5d9v" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.376537 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de17af80-1849-4a19-ae89-50057bc76aa3-combined-ca-bundle\") pod \"ovn-controller-metrics-g5d9v\" (UID: 
\"de17af80-1849-4a19-ae89-50057bc76aa3\") " pod="openstack/ovn-controller-metrics-g5d9v" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.376606 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmsdb\" (UniqueName: \"kubernetes.io/projected/318c8faf-9486-4f68-8fde-058544f1315b-kube-api-access-vmsdb\") pod \"dnsmasq-dns-7fd796d7df-cqzdh\" (UID: \"318c8faf-9486-4f68-8fde-058544f1315b\") " pod="openstack/dnsmasq-dns-7fd796d7df-cqzdh" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.376676 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/318c8faf-9486-4f68-8fde-058544f1315b-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-cqzdh\" (UID: \"318c8faf-9486-4f68-8fde-058544f1315b\") " pod="openstack/dnsmasq-dns-7fd796d7df-cqzdh" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.376754 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/de17af80-1849-4a19-ae89-50057bc76aa3-ovs-rundir\") pod \"ovn-controller-metrics-g5d9v\" (UID: \"de17af80-1849-4a19-ae89-50057bc76aa3\") " pod="openstack/ovn-controller-metrics-g5d9v" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.376563 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/318c8faf-9486-4f68-8fde-058544f1315b-config\") pod \"dnsmasq-dns-7fd796d7df-cqzdh\" (UID: \"318c8faf-9486-4f68-8fde-058544f1315b\") " pod="openstack/dnsmasq-dns-7fd796d7df-cqzdh" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.376569 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de17af80-1849-4a19-ae89-50057bc76aa3-config\") pod \"ovn-controller-metrics-g5d9v\" (UID: \"de17af80-1849-4a19-ae89-50057bc76aa3\") " pod="openstack/ovn-controller-metrics-g5d9v" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.376894 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/de17af80-1849-4a19-ae89-50057bc76aa3-ovn-rundir\") pod \"ovn-controller-metrics-g5d9v\" (UID: \"de17af80-1849-4a19-ae89-50057bc76aa3\") " pod="openstack/ovn-controller-metrics-g5d9v" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.377646 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/318c8faf-9486-4f68-8fde-058544f1315b-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-cqzdh\" (UID: \"318c8faf-9486-4f68-8fde-058544f1315b\") " pod="openstack/dnsmasq-dns-7fd796d7df-cqzdh" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.377989 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/318c8faf-9486-4f68-8fde-058544f1315b-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-cqzdh\" (UID: \"318c8faf-9486-4f68-8fde-058544f1315b\") " pod="openstack/dnsmasq-dns-7fd796d7df-cqzdh" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.378187 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-lngkw"] Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.379614 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.384265 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.385849 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de17af80-1849-4a19-ae89-50057bc76aa3-combined-ca-bundle\") pod \"ovn-controller-metrics-g5d9v\" (UID: \"de17af80-1849-4a19-ae89-50057bc76aa3\") " pod="openstack/ovn-controller-metrics-g5d9v" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.388446 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/de17af80-1849-4a19-ae89-50057bc76aa3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-g5d9v\" (UID: \"de17af80-1849-4a19-ae89-50057bc76aa3\") " pod="openstack/ovn-controller-metrics-g5d9v" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.390686 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-lngkw"] Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.397047 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmsdb\" (UniqueName: \"kubernetes.io/projected/318c8faf-9486-4f68-8fde-058544f1315b-kube-api-access-vmsdb\") pod \"dnsmasq-dns-7fd796d7df-cqzdh\" (UID: \"318c8faf-9486-4f68-8fde-058544f1315b\") " pod="openstack/dnsmasq-dns-7fd796d7df-cqzdh" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.400415 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnmk4\" (UniqueName: \"kubernetes.io/projected/de17af80-1849-4a19-ae89-50057bc76aa3-kube-api-access-bnmk4\") pod \"ovn-controller-metrics-g5d9v\" (UID: \"de17af80-1849-4a19-ae89-50057bc76aa3\") " pod="openstack/ovn-controller-metrics-g5d9v" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.478090 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c6b95f0-73a1-4b25-9905-2fa224e52142-config\") pod \"ovn-northd-0\" (UID: \"3c6b95f0-73a1-4b25-9905-2fa224e52142\") " pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.478152 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-config\") pod \"dnsmasq-dns-86db49b7ff-lngkw\" (UID: \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\") " pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.478172 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c6b95f0-73a1-4b25-9905-2fa224e52142-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3c6b95f0-73a1-4b25-9905-2fa224e52142\") " pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.478195 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3c6b95f0-73a1-4b25-9905-2fa224e52142-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3c6b95f0-73a1-4b25-9905-2fa224e52142\") " pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.478265 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-lngkw\" (UID: \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\") " pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.478290 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-lngkw\" (UID: \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\") " pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.478313 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-lngkw\" (UID: \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\") " pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.478351 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c6b95f0-73a1-4b25-9905-2fa224e52142-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3c6b95f0-73a1-4b25-9905-2fa224e52142\") " pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.478369 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv2f8\" (UniqueName: \"kubernetes.io/projected/3c6b95f0-73a1-4b25-9905-2fa224e52142-kube-api-access-vv2f8\") pod \"ovn-northd-0\" (UID: \"3c6b95f0-73a1-4b25-9905-2fa224e52142\") " pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.478409 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdk7w\" (UniqueName: \"kubernetes.io/projected/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-kube-api-access-cdk7w\") pod \"dnsmasq-dns-86db49b7ff-lngkw\" (UID: \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\") " pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.478428 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c6b95f0-73a1-4b25-9905-2fa224e52142-scripts\") pod \"ovn-northd-0\" (UID: \"3c6b95f0-73a1-4b25-9905-2fa224e52142\") " pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.478447 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6b95f0-73a1-4b25-9905-2fa224e52142-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3c6b95f0-73a1-4b25-9905-2fa224e52142\") " pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.479661 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c6b95f0-73a1-4b25-9905-2fa224e52142-config\") pod \"ovn-northd-0\" (UID: \"3c6b95f0-73a1-4b25-9905-2fa224e52142\") " pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.479918 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/3c6b95f0-73a1-4b25-9905-2fa224e52142-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3c6b95f0-73a1-4b25-9905-2fa224e52142\") " pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.481063 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c6b95f0-73a1-4b25-9905-2fa224e52142-scripts\") pod \"ovn-northd-0\" (UID: \"3c6b95f0-73a1-4b25-9905-2fa224e52142\") " pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.482522 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c6b95f0-73a1-4b25-9905-2fa224e52142-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3c6b95f0-73a1-4b25-9905-2fa224e52142\") " pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.483801 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6b95f0-73a1-4b25-9905-2fa224e52142-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3c6b95f0-73a1-4b25-9905-2fa224e52142\") " pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.488583 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c6b95f0-73a1-4b25-9905-2fa224e52142-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3c6b95f0-73a1-4b25-9905-2fa224e52142\") " pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.496135 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv2f8\" (UniqueName: \"kubernetes.io/projected/3c6b95f0-73a1-4b25-9905-2fa224e52142-kube-api-access-vv2f8\") pod \"ovn-northd-0\" (UID: \"3c6b95f0-73a1-4b25-9905-2fa224e52142\") " pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.496469 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-g5d9v" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.579527 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-lngkw\" (UID: \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\") " pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.579736 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-lngkw\" (UID: \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\") " pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.579755 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-lngkw\" (UID: \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\") " pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.579809 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdk7w\" (UniqueName: \"kubernetes.io/projected/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-kube-api-access-cdk7w\") pod \"dnsmasq-dns-86db49b7ff-lngkw\" (UID: \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\") " pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.579853 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-config\") pod \"dnsmasq-dns-86db49b7ff-lngkw\" (UID: \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\") " pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.580319 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-lngkw\" (UID: \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\") " pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.580884 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-lngkw\" (UID: \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\") " pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.581348 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-lngkw\" (UID: \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\") " pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.581576 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-config\") pod \"dnsmasq-dns-86db49b7ff-lngkw\" (UID: \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\") " pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 
17:30:05.639337 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdk7w\" (UniqueName: \"kubernetes.io/projected/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-kube-api-access-cdk7w\") pod \"dnsmasq-dns-86db49b7ff-lngkw\" (UID: \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\") " pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.673331 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.761317 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.803529 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-cqzdh" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.836664 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-cqzdh" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.884886 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/318c8faf-9486-4f68-8fde-058544f1315b-ovsdbserver-nb\") pod \"318c8faf-9486-4f68-8fde-058544f1315b\" (UID: \"318c8faf-9486-4f68-8fde-058544f1315b\") " Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.884966 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmsdb\" (UniqueName: \"kubernetes.io/projected/318c8faf-9486-4f68-8fde-058544f1315b-kube-api-access-vmsdb\") pod \"318c8faf-9486-4f68-8fde-058544f1315b\" (UID: \"318c8faf-9486-4f68-8fde-058544f1315b\") " Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.885148 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/318c8faf-9486-4f68-8fde-058544f1315b-dns-svc\") pod \"318c8faf-9486-4f68-8fde-058544f1315b\" (UID: \"318c8faf-9486-4f68-8fde-058544f1315b\") " Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.885169 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/318c8faf-9486-4f68-8fde-058544f1315b-config\") pod \"318c8faf-9486-4f68-8fde-058544f1315b\" (UID: \"318c8faf-9486-4f68-8fde-058544f1315b\") " Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.885358 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318c8faf-9486-4f68-8fde-058544f1315b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "318c8faf-9486-4f68-8fde-058544f1315b" (UID: "318c8faf-9486-4f68-8fde-058544f1315b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.885560 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318c8faf-9486-4f68-8fde-058544f1315b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "318c8faf-9486-4f68-8fde-058544f1315b" (UID: "318c8faf-9486-4f68-8fde-058544f1315b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.885792 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/318c8faf-9486-4f68-8fde-058544f1315b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.885806 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/318c8faf-9486-4f68-8fde-058544f1315b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.886151 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318c8faf-9486-4f68-8fde-058544f1315b-config" (OuterVolumeSpecName: "config") pod "318c8faf-9486-4f68-8fde-058544f1315b" (UID: "318c8faf-9486-4f68-8fde-058544f1315b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.889272 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/318c8faf-9486-4f68-8fde-058544f1315b-kube-api-access-vmsdb" (OuterVolumeSpecName: "kube-api-access-vmsdb") pod "318c8faf-9486-4f68-8fde-058544f1315b" (UID: "318c8faf-9486-4f68-8fde-058544f1315b"). InnerVolumeSpecName "kube-api-access-vmsdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.960368 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-g5d9v"] Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.987770 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/318c8faf-9486-4f68-8fde-058544f1315b-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:05 crc kubenswrapper[4858]: I0202 17:30:05.987805 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmsdb\" (UniqueName: \"kubernetes.io/projected/318c8faf-9486-4f68-8fde-058544f1315b-kube-api-access-vmsdb\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:06 crc kubenswrapper[4858]: I0202 17:30:06.098445 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 17:30:06 crc kubenswrapper[4858]: W0202 17:30:06.103371 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c6b95f0_73a1_4b25_9905_2fa224e52142.slice/crio-1532d1775ca3a2e06f494c92ed72df9f50468507da62e445ae9b29d6ed8b17ed WatchSource:0}: Error finding container 1532d1775ca3a2e06f494c92ed72df9f50468507da62e445ae9b29d6ed8b17ed: Status 404 returned error can't find the container with id 1532d1775ca3a2e06f494c92ed72df9f50468507da62e445ae9b29d6ed8b17ed Feb 02 17:30:06 crc kubenswrapper[4858]: I0202 17:30:06.225363 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-lngkw"] Feb 02 17:30:06 crc kubenswrapper[4858]: W0202 17:30:06.231824 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdbcb8ee_7153_422f_9d1a_05e50dae3abd.slice/crio-566e5b4051061caedb24fdbf07eb3dfa71e215b71c54aa3be73f1349a6d509cb WatchSource:0}: Error finding container 566e5b4051061caedb24fdbf07eb3dfa71e215b71c54aa3be73f1349a6d509cb: Status 404 returned error can't find the container with id 566e5b4051061caedb24fdbf07eb3dfa71e215b71c54aa3be73f1349a6d509cb Feb 
02 17:30:06 crc kubenswrapper[4858]: I0202 17:30:06.817621 4858 generic.go:334] "Generic (PLEG): container finished" podID="cdbcb8ee-7153-422f-9d1a-05e50dae3abd" containerID="35be59ffe54209d15a5cfd3adb43a453a46e5fc6771372266735992f6d24fa15" exitCode=0 Feb 02 17:30:06 crc kubenswrapper[4858]: I0202 17:30:06.817854 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" event={"ID":"cdbcb8ee-7153-422f-9d1a-05e50dae3abd","Type":"ContainerDied","Data":"35be59ffe54209d15a5cfd3adb43a453a46e5fc6771372266735992f6d24fa15"} Feb 02 17:30:06 crc kubenswrapper[4858]: I0202 17:30:06.818784 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" event={"ID":"cdbcb8ee-7153-422f-9d1a-05e50dae3abd","Type":"ContainerStarted","Data":"566e5b4051061caedb24fdbf07eb3dfa71e215b71c54aa3be73f1349a6d509cb"} Feb 02 17:30:06 crc kubenswrapper[4858]: I0202 17:30:06.825370 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-g5d9v" event={"ID":"de17af80-1849-4a19-ae89-50057bc76aa3","Type":"ContainerStarted","Data":"a986fadf39304ba8baf2ed5a8224be466b3effc83d33454aef49777e9c2a4e66"} Feb 02 17:30:06 crc kubenswrapper[4858]: I0202 17:30:06.825420 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-g5d9v" event={"ID":"de17af80-1849-4a19-ae89-50057bc76aa3","Type":"ContainerStarted","Data":"14baed7631dad71f46af2765215b22758db06be60ab4d2b3e044b192eaa666bc"} Feb 02 17:30:06 crc kubenswrapper[4858]: I0202 17:30:06.838650 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-cqzdh" Feb 02 17:30:06 crc kubenswrapper[4858]: I0202 17:30:06.838717 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3c6b95f0-73a1-4b25-9905-2fa224e52142","Type":"ContainerStarted","Data":"1532d1775ca3a2e06f494c92ed72df9f50468507da62e445ae9b29d6ed8b17ed"} Feb 02 17:30:06 crc kubenswrapper[4858]: I0202 17:30:06.886361 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-g5d9v" podStartSLOduration=1.886337436 podStartE2EDuration="1.886337436s" podCreationTimestamp="2026-02-02 17:30:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:30:06.873658749 +0000 UTC m=+908.026074014" watchObservedRunningTime="2026-02-02 17:30:06.886337436 +0000 UTC m=+908.038752711" Feb 02 17:30:06 crc kubenswrapper[4858]: I0202 17:30:06.979571 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cqzdh"] Feb 02 17:30:06 crc kubenswrapper[4858]: I0202 17:30:06.988624 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cqzdh"] Feb 02 17:30:07 crc kubenswrapper[4858]: I0202 17:30:07.849680 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" event={"ID":"cdbcb8ee-7153-422f-9d1a-05e50dae3abd","Type":"ContainerStarted","Data":"fce03b4f1413ee836e9cca3010980f1e53fffedc2068457397fc5a79d1a6a0eb"} Feb 02 17:30:07 crc kubenswrapper[4858]: I0202 17:30:07.850064 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" Feb 02 17:30:07 crc kubenswrapper[4858]: I0202 17:30:07.854739 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"3c6b95f0-73a1-4b25-9905-2fa224e52142","Type":"ContainerStarted","Data":"e704475dfe3d765529ee398e5ec7b1694eb2e3995628fc95abeebe94ebde6993"} Feb 02 17:30:07 crc kubenswrapper[4858]: I0202 17:30:07.873939 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" podStartSLOduration=2.873917809 podStartE2EDuration="2.873917809s" podCreationTimestamp="2026-02-02 17:30:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:30:07.872081398 +0000 UTC m=+909.024496663" watchObservedRunningTime="2026-02-02 17:30:07.873917809 +0000 UTC m=+909.026333084" Feb 02 17:30:08 crc kubenswrapper[4858]: I0202 17:30:08.271697 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 02 17:30:08 crc kubenswrapper[4858]: I0202 17:30:08.271789 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 02 17:30:08 crc kubenswrapper[4858]: I0202 17:30:08.418468 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="318c8faf-9486-4f68-8fde-058544f1315b" path="/var/lib/kubelet/pods/318c8faf-9486-4f68-8fde-058544f1315b/volumes" Feb 02 17:30:08 crc kubenswrapper[4858]: I0202 17:30:08.632525 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 02 17:30:08 crc kubenswrapper[4858]: I0202 17:30:08.868784 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3c6b95f0-73a1-4b25-9905-2fa224e52142","Type":"ContainerStarted","Data":"12086ceec4bc49b9353ce8b630921ddc8c46a6e2dce4b018ec6cdf721aa88b03"} Feb 02 17:30:08 crc kubenswrapper[4858]: I0202 17:30:08.895085 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.586605059 podStartE2EDuration="3.895065528s" podCreationTimestamp="2026-02-02 17:30:05 +0000 UTC" firstStartedPulling="2026-02-02 17:30:06.105667677 +0000 UTC m=+907.258082942" lastFinishedPulling="2026-02-02 17:30:07.414128146 +0000 UTC m=+908.566543411" observedRunningTime="2026-02-02 17:30:08.891275431 +0000 UTC m=+910.043690706" watchObservedRunningTime="2026-02-02 17:30:08.895065528 +0000 UTC m=+910.047480793" Feb 02 17:30:08 crc kubenswrapper[4858]: I0202 17:30:08.967505 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 02 17:30:09 crc kubenswrapper[4858]: I0202 17:30:09.450609 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 02 17:30:09 crc kubenswrapper[4858]: I0202 17:30:09.451292 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 02 17:30:09 crc kubenswrapper[4858]: I0202 17:30:09.876678 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.082710 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b266-account-create-update-zmm9r"] Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.083723 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b266-account-create-update-zmm9r" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.086969 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.106364 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b266-account-create-update-zmm9r"] Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.136710 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-xf4m2"] Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.138163 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-xf4m2" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.144111 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-xf4m2"] Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.201100 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e82b54-f2bd-4307-bd7a-f613c0dac23c-operator-scripts\") pod \"keystone-db-create-xf4m2\" (UID: \"e0e82b54-f2bd-4307-bd7a-f613c0dac23c\") " pod="openstack/keystone-db-create-xf4m2" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.201168 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrzzt\" (UniqueName: \"kubernetes.io/projected/e0e82b54-f2bd-4307-bd7a-f613c0dac23c-kube-api-access-lrzzt\") pod \"keystone-db-create-xf4m2\" (UID: \"e0e82b54-f2bd-4307-bd7a-f613c0dac23c\") " pod="openstack/keystone-db-create-xf4m2" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.201373 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7ljx\" (UniqueName: \"kubernetes.io/projected/431fd5ca-da9e-4493-acf7-670eb92cf3aa-kube-api-access-c7ljx\") pod \"keystone-b266-account-create-update-zmm9r\" (UID: \"431fd5ca-da9e-4493-acf7-670eb92cf3aa\") " pod="openstack/keystone-b266-account-create-update-zmm9r" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.201535 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/431fd5ca-da9e-4493-acf7-670eb92cf3aa-operator-scripts\") pod \"keystone-b266-account-create-update-zmm9r\" (UID: \"431fd5ca-da9e-4493-acf7-670eb92cf3aa\") " pod="openstack/keystone-b266-account-create-update-zmm9r" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.287838 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-ddwfk"] Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.289201 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-ddwfk" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.296459 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-ddwfk"] Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.303075 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e82b54-f2bd-4307-bd7a-f613c0dac23c-operator-scripts\") pod \"keystone-db-create-xf4m2\" (UID: \"e0e82b54-f2bd-4307-bd7a-f613c0dac23c\") " pod="openstack/keystone-db-create-xf4m2" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.303153 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrzzt\" (UniqueName: \"kubernetes.io/projected/e0e82b54-f2bd-4307-bd7a-f613c0dac23c-kube-api-access-lrzzt\") pod \"keystone-db-create-xf4m2\" (UID: \"e0e82b54-f2bd-4307-bd7a-f613c0dac23c\") " pod="openstack/keystone-db-create-xf4m2" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.303225 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7ljx\" (UniqueName: \"kubernetes.io/projected/431fd5ca-da9e-4493-acf7-670eb92cf3aa-kube-api-access-c7ljx\") pod \"keystone-b266-account-create-update-zmm9r\" (UID: \"431fd5ca-da9e-4493-acf7-670eb92cf3aa\") " pod="openstack/keystone-b266-account-create-update-zmm9r" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.303297 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/431fd5ca-da9e-4493-acf7-670eb92cf3aa-operator-scripts\") pod \"keystone-b266-account-create-update-zmm9r\" (UID: \"431fd5ca-da9e-4493-acf7-670eb92cf3aa\") " pod="openstack/keystone-b266-account-create-update-zmm9r" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.304118 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/431fd5ca-da9e-4493-acf7-670eb92cf3aa-operator-scripts\") pod \"keystone-b266-account-create-update-zmm9r\" (UID: \"431fd5ca-da9e-4493-acf7-670eb92cf3aa\") " pod="openstack/keystone-b266-account-create-update-zmm9r" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.304707 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e82b54-f2bd-4307-bd7a-f613c0dac23c-operator-scripts\") pod \"keystone-db-create-xf4m2\" (UID: \"e0e82b54-f2bd-4307-bd7a-f613c0dac23c\") " pod="openstack/keystone-db-create-xf4m2" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.306985 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-e76f-account-create-update-ng2zq"] Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.308420 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e76f-account-create-update-ng2zq" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.316256 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e76f-account-create-update-ng2zq"] Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.319360 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.332609 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrzzt\" (UniqueName: \"kubernetes.io/projected/e0e82b54-f2bd-4307-bd7a-f613c0dac23c-kube-api-access-lrzzt\") pod \"keystone-db-create-xf4m2\" (UID: \"e0e82b54-f2bd-4307-bd7a-f613c0dac23c\") " pod="openstack/keystone-db-create-xf4m2" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.348751 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7ljx\" (UniqueName: \"kubernetes.io/projected/431fd5ca-da9e-4493-acf7-670eb92cf3aa-kube-api-access-c7ljx\") pod \"keystone-b266-account-create-update-zmm9r\" (UID: \"431fd5ca-da9e-4493-acf7-670eb92cf3aa\") " pod="openstack/keystone-b266-account-create-update-zmm9r" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.404685 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwq4k\" (UniqueName: \"kubernetes.io/projected/757ea041-5d2d-4b24-9f27-ca5ee8116763-kube-api-access-bwq4k\") pod \"placement-e76f-account-create-update-ng2zq\" (UID: \"757ea041-5d2d-4b24-9f27-ca5ee8116763\") " pod="openstack/placement-e76f-account-create-update-ng2zq" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.404857 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17d7410b-4b1f-4a80-ab62-adf84d324b21-operator-scripts\") pod \"placement-db-create-ddwfk\" (UID: \"17d7410b-4b1f-4a80-ab62-adf84d324b21\") " pod="openstack/placement-db-create-ddwfk" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.404952 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkjv6\" (UniqueName: \"kubernetes.io/projected/17d7410b-4b1f-4a80-ab62-adf84d324b21-kube-api-access-jkjv6\") pod \"placement-db-create-ddwfk\" (UID: \"17d7410b-4b1f-4a80-ab62-adf84d324b21\") " pod="openstack/placement-db-create-ddwfk" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.405040 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/757ea041-5d2d-4b24-9f27-ca5ee8116763-operator-scripts\") pod \"placement-e76f-account-create-update-ng2zq\" (UID: \"757ea041-5d2d-4b24-9f27-ca5ee8116763\") " pod="openstack/placement-e76f-account-create-update-ng2zq" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.407366 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b266-account-create-update-zmm9r" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.458253 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-xf4m2" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.507140 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwq4k\" (UniqueName: \"kubernetes.io/projected/757ea041-5d2d-4b24-9f27-ca5ee8116763-kube-api-access-bwq4k\") pod \"placement-e76f-account-create-update-ng2zq\" (UID: \"757ea041-5d2d-4b24-9f27-ca5ee8116763\") " pod="openstack/placement-e76f-account-create-update-ng2zq" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.507370 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17d7410b-4b1f-4a80-ab62-adf84d324b21-operator-scripts\") pod \"placement-db-create-ddwfk\" (UID: \"17d7410b-4b1f-4a80-ab62-adf84d324b21\") " pod="openstack/placement-db-create-ddwfk" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.507411 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkjv6\" (UniqueName: \"kubernetes.io/projected/17d7410b-4b1f-4a80-ab62-adf84d324b21-kube-api-access-jkjv6\") pod \"placement-db-create-ddwfk\" (UID: \"17d7410b-4b1f-4a80-ab62-adf84d324b21\") " pod="openstack/placement-db-create-ddwfk" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.507437 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/757ea041-5d2d-4b24-9f27-ca5ee8116763-operator-scripts\") pod \"placement-e76f-account-create-update-ng2zq\" (UID: \"757ea041-5d2d-4b24-9f27-ca5ee8116763\") " pod="openstack/placement-e76f-account-create-update-ng2zq" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.508235 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17d7410b-4b1f-4a80-ab62-adf84d324b21-operator-scripts\") pod \"placement-db-create-ddwfk\" (UID: \"17d7410b-4b1f-4a80-ab62-adf84d324b21\") " pod="openstack/placement-db-create-ddwfk" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.509825 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/757ea041-5d2d-4b24-9f27-ca5ee8116763-operator-scripts\") pod \"placement-e76f-account-create-update-ng2zq\" (UID: \"757ea041-5d2d-4b24-9f27-ca5ee8116763\") " pod="openstack/placement-e76f-account-create-update-ng2zq" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.524876 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkjv6\" (UniqueName: \"kubernetes.io/projected/17d7410b-4b1f-4a80-ab62-adf84d324b21-kube-api-access-jkjv6\") pod \"placement-db-create-ddwfk\" (UID: \"17d7410b-4b1f-4a80-ab62-adf84d324b21\") " pod="openstack/placement-db-create-ddwfk" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.525318 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwq4k\" (UniqueName: \"kubernetes.io/projected/757ea041-5d2d-4b24-9f27-ca5ee8116763-kube-api-access-bwq4k\") pod \"placement-e76f-account-create-update-ng2zq\" (UID: \"757ea041-5d2d-4b24-9f27-ca5ee8116763\") " pod="openstack/placement-e76f-account-create-update-ng2zq" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.606596 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-ddwfk" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.631611 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e76f-account-create-update-ng2zq" Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.849202 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b266-account-create-update-zmm9r"] Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.903905 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b266-account-create-update-zmm9r" event={"ID":"431fd5ca-da9e-4493-acf7-670eb92cf3aa","Type":"ContainerStarted","Data":"7a83d87fb0f231ead300caf2d310b555b07a2e5af27d6f9dd87312b717e23ebe"} Feb 02 17:30:11 crc kubenswrapper[4858]: I0202 17:30:11.942293 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-xf4m2"] Feb 02 17:30:11 crc kubenswrapper[4858]: W0202 17:30:11.946350 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0e82b54_f2bd_4307_bd7a_f613c0dac23c.slice/crio-09497d4f12803acb834a8be063a8e8d7e9b56c851a64ffaaf8a973c645247352 WatchSource:0}: Error finding container 09497d4f12803acb834a8be063a8e8d7e9b56c851a64ffaaf8a973c645247352: Status 404 returned error can't find the container with id 09497d4f12803acb834a8be063a8e8d7e9b56c851a64ffaaf8a973c645247352 Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.068045 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-ddwfk"] Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.128332 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e76f-account-create-update-ng2zq"] Feb 02 17:30:12 crc kubenswrapper[4858]: W0202 17:30:12.131842 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod757ea041_5d2d_4b24_9f27_ca5ee8116763.slice/crio-1f1e00b765611cf7924c3d42f799490b54cf2d95d9a0e4a18ad9e852301d76b8 WatchSource:0}: Error finding container 1f1e00b765611cf7924c3d42f799490b54cf2d95d9a0e4a18ad9e852301d76b8: Status 404 returned error can't find the container with id 1f1e00b765611cf7924c3d42f799490b54cf2d95d9a0e4a18ad9e852301d76b8 Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.354069 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.428789 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-lngkw"] Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.429086 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" podUID="cdbcb8ee-7153-422f-9d1a-05e50dae3abd" containerName="dnsmasq-dns" containerID="cri-o://fce03b4f1413ee836e9cca3010980f1e53fffedc2068457397fc5a79d1a6a0eb" gracePeriod=10 Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.434125 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.448666 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-6qdtt"] Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.450799 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-6qdtt" Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.462587 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-6qdtt"] Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.520668 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-dns-svc\") pod \"dnsmasq-dns-698758b865-6qdtt\" (UID: \"5350e65d-0f27-4ac0-9251-00ce22348491\") " pod="openstack/dnsmasq-dns-698758b865-6qdtt" Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.520759 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-6qdtt\" (UID: \"5350e65d-0f27-4ac0-9251-00ce22348491\") " pod="openstack/dnsmasq-dns-698758b865-6qdtt" Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.520783 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-6qdtt\" (UID: \"5350e65d-0f27-4ac0-9251-00ce22348491\") " pod="openstack/dnsmasq-dns-698758b865-6qdtt" Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.520830 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-config\") pod \"dnsmasq-dns-698758b865-6qdtt\" (UID: \"5350e65d-0f27-4ac0-9251-00ce22348491\") " pod="openstack/dnsmasq-dns-698758b865-6qdtt" Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.520863 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wprjb\" (UniqueName: \"kubernetes.io/projected/5350e65d-0f27-4ac0-9251-00ce22348491-kube-api-access-wprjb\") pod \"dnsmasq-dns-698758b865-6qdtt\" (UID: \"5350e65d-0f27-4ac0-9251-00ce22348491\") " pod="openstack/dnsmasq-dns-698758b865-6qdtt" Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.622469 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-6qdtt\" (UID: \"5350e65d-0f27-4ac0-9251-00ce22348491\") " pod="openstack/dnsmasq-dns-698758b865-6qdtt" Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.622832 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-6qdtt\" (UID: \"5350e65d-0f27-4ac0-9251-00ce22348491\") " pod="openstack/dnsmasq-dns-698758b865-6qdtt" Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.622954 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-config\") pod \"dnsmasq-dns-698758b865-6qdtt\" (UID: \"5350e65d-0f27-4ac0-9251-00ce22348491\") " pod="openstack/dnsmasq-dns-698758b865-6qdtt" Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.623036 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wprjb\" (UniqueName: \"kubernetes.io/projected/5350e65d-0f27-4ac0-9251-00ce22348491-kube-api-access-wprjb\") pod \"dnsmasq-dns-698758b865-6qdtt\" (UID: \"5350e65d-0f27-4ac0-9251-00ce22348491\") " pod="openstack/dnsmasq-dns-698758b865-6qdtt" Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.623166 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-dns-svc\") pod \"dnsmasq-dns-698758b865-6qdtt\" (UID: \"5350e65d-0f27-4ac0-9251-00ce22348491\") " pod="openstack/dnsmasq-dns-698758b865-6qdtt" Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.623624 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-6qdtt\" (UID: \"5350e65d-0f27-4ac0-9251-00ce22348491\") " pod="openstack/dnsmasq-dns-698758b865-6qdtt" Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.624190 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-config\") pod \"dnsmasq-dns-698758b865-6qdtt\" (UID: \"5350e65d-0f27-4ac0-9251-00ce22348491\") " pod="openstack/dnsmasq-dns-698758b865-6qdtt" Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.624416 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-6qdtt\" (UID: \"5350e65d-0f27-4ac0-9251-00ce22348491\") " pod="openstack/dnsmasq-dns-698758b865-6qdtt" Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.625136 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-dns-svc\") pod \"dnsmasq-dns-698758b865-6qdtt\" (UID: \"5350e65d-0f27-4ac0-9251-00ce22348491\") " pod="openstack/dnsmasq-dns-698758b865-6qdtt" Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.665372 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wprjb\" (UniqueName: \"kubernetes.io/projected/5350e65d-0f27-4ac0-9251-00ce22348491-kube-api-access-wprjb\") pod \"dnsmasq-dns-698758b865-6qdtt\" (UID: \"5350e65d-0f27-4ac0-9251-00ce22348491\") " pod="openstack/dnsmasq-dns-698758b865-6qdtt" Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.775205 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-6qdtt" Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.787786 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.875400 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.913705 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ddwfk" event={"ID":"17d7410b-4b1f-4a80-ab62-adf84d324b21","Type":"ContainerStarted","Data":"573a6d043e81d0f4df938492d51f1f55cbbbd843e49553867044e5a8dbec3120"} Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.915100 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xf4m2" event={"ID":"e0e82b54-f2bd-4307-bd7a-f613c0dac23c","Type":"ContainerStarted","Data":"09497d4f12803acb834a8be063a8e8d7e9b56c851a64ffaaf8a973c645247352"} Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.918191 4858 generic.go:334] "Generic (PLEG): container finished" podID="cdbcb8ee-7153-422f-9d1a-05e50dae3abd" containerID="fce03b4f1413ee836e9cca3010980f1e53fffedc2068457397fc5a79d1a6a0eb" exitCode=0 Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.918265 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" event={"ID":"cdbcb8ee-7153-422f-9d1a-05e50dae3abd","Type":"ContainerDied","Data":"fce03b4f1413ee836e9cca3010980f1e53fffedc2068457397fc5a79d1a6a0eb"} Feb 02 17:30:12 crc kubenswrapper[4858]: I0202 17:30:12.920623 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e76f-account-create-update-ng2zq" event={"ID":"757ea041-5d2d-4b24-9f27-ca5ee8116763","Type":"ContainerStarted","Data":"1f1e00b765611cf7924c3d42f799490b54cf2d95d9a0e4a18ad9e852301d76b8"} Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.259418 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-6qdtt"] Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.548795 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.558001 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.561813 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.562539 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.562578 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-ttpf4" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.562605 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.581777 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.645093 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzf9n\" (UniqueName: \"kubernetes.io/projected/703d6256-20d4-45fc-9a4c-ec6970ea250d-kube-api-access-vzf9n\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.645161 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/703d6256-20d4-45fc-9a4c-ec6970ea250d-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.645326 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/703d6256-20d4-45fc-9a4c-ec6970ea250d-lock\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.645373 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/703d6256-20d4-45fc-9a4c-ec6970ea250d-etc-swift\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.645528 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.645647 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/703d6256-20d4-45fc-9a4c-ec6970ea250d-cache\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.747220 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/703d6256-20d4-45fc-9a4c-ec6970ea250d-lock\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.747277 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/703d6256-20d4-45fc-9a4c-ec6970ea250d-etc-swift\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.747318 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.747395 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/703d6256-20d4-45fc-9a4c-ec6970ea250d-cache\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.747450 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzf9n\" (UniqueName: \"kubernetes.io/projected/703d6256-20d4-45fc-9a4c-ec6970ea250d-kube-api-access-vzf9n\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.747482 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/703d6256-20d4-45fc-9a4c-ec6970ea250d-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.748327 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.750145 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/703d6256-20d4-45fc-9a4c-ec6970ea250d-lock\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:13 crc kubenswrapper[4858]: E0202 17:30:13.750439 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 17:30:13 crc kubenswrapper[4858]: E0202 17:30:13.750573 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 17:30:13 crc kubenswrapper[4858]: E0202 17:30:13.750741 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/703d6256-20d4-45fc-9a4c-ec6970ea250d-etc-swift podName:703d6256-20d4-45fc-9a4c-ec6970ea250d nodeName:}" failed. No retries permitted until 2026-02-02 17:30:14.250714492 +0000 UTC m=+915.403129777 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/703d6256-20d4-45fc-9a4c-ec6970ea250d-etc-swift") pod "swift-storage-0" (UID: "703d6256-20d4-45fc-9a4c-ec6970ea250d") : configmap "swift-ring-files" not found Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.754043 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/703d6256-20d4-45fc-9a4c-ec6970ea250d-cache\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.760318 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/703d6256-20d4-45fc-9a4c-ec6970ea250d-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.776836 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzf9n\" (UniqueName: \"kubernetes.io/projected/703d6256-20d4-45fc-9a4c-ec6970ea250d-kube-api-access-vzf9n\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.787391 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-44qfs"] Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.788428 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-44qfs" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.790255 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.792307 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.792937 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.795407 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.799056 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-44qfs"] Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.849108 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-dispersionconf\") pod \"swift-ring-rebalance-44qfs\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " pod="openstack/swift-ring-rebalance-44qfs" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.849192 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpmm2\" (UniqueName: \"kubernetes.io/projected/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-kube-api-access-tpmm2\") pod \"swift-ring-rebalance-44qfs\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " pod="openstack/swift-ring-rebalance-44qfs" Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.849382 4858 
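Note: the failed etc-swift mount above is retried with a doubling delay; durationBeforeRetry is 500ms here and grows to 1s, 2s, and 4s in the retries logged below. A standalone sketch of that doubling schedule; the 2m2s ceiling is an assumption for illustration, not something this log shows:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Doubling retry delay as seen in the nestedpendingoperations messages:
        // 500ms -> 1s -> 2s -> 4s -> ...
        const maxDelay = 2*time.Minute + 2*time.Second // assumed cap
        delay := 500 * time.Millisecond
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d: no retry sooner than %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }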
Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.849382 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-scripts\") pod \"swift-ring-rebalance-44qfs\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " pod="openstack/swift-ring-rebalance-44qfs"
Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.849429 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-ring-data-devices\") pod \"swift-ring-rebalance-44qfs\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " pod="openstack/swift-ring-rebalance-44qfs"
Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.849459 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-combined-ca-bundle\") pod \"swift-ring-rebalance-44qfs\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " pod="openstack/swift-ring-rebalance-44qfs"
Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.849570 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-etc-swift\") pod \"swift-ring-rebalance-44qfs\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " pod="openstack/swift-ring-rebalance-44qfs"
Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.849592 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-swiftconf\") pod \"swift-ring-rebalance-44qfs\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " pod="openstack/swift-ring-rebalance-44qfs"
Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.928063 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-6qdtt" event={"ID":"5350e65d-0f27-4ac0-9251-00ce22348491","Type":"ContainerStarted","Data":"fa78dd79bc7bb7c7948ab86ea162866a718f4f05cd03d0ceeaac8a678685ca04"}
Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.951294 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-dispersionconf\") pod \"swift-ring-rebalance-44qfs\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " pod="openstack/swift-ring-rebalance-44qfs"
Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.951366 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpmm2\" (UniqueName: \"kubernetes.io/projected/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-kube-api-access-tpmm2\") pod \"swift-ring-rebalance-44qfs\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " pod="openstack/swift-ring-rebalance-44qfs"
Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.951424 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-scripts\") pod \"swift-ring-rebalance-44qfs\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " pod="openstack/swift-ring-rebalance-44qfs"
Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.951441 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-ring-data-devices\") pod \"swift-ring-rebalance-44qfs\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " pod="openstack/swift-ring-rebalance-44qfs"
Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.951457 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-combined-ca-bundle\") pod \"swift-ring-rebalance-44qfs\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " pod="openstack/swift-ring-rebalance-44qfs"
Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.951491 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-etc-swift\") pod \"swift-ring-rebalance-44qfs\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " pod="openstack/swift-ring-rebalance-44qfs"
Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.951508 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-swiftconf\") pod \"swift-ring-rebalance-44qfs\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " pod="openstack/swift-ring-rebalance-44qfs"
Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.952468 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-etc-swift\") pod \"swift-ring-rebalance-44qfs\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " pod="openstack/swift-ring-rebalance-44qfs"
Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.952669 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-scripts\") pod \"swift-ring-rebalance-44qfs\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " pod="openstack/swift-ring-rebalance-44qfs"
Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.952821 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-ring-data-devices\") pod \"swift-ring-rebalance-44qfs\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " pod="openstack/swift-ring-rebalance-44qfs"
Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.957358 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-swiftconf\") pod \"swift-ring-rebalance-44qfs\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " pod="openstack/swift-ring-rebalance-44qfs"
Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.957495 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-combined-ca-bundle\") pod \"swift-ring-rebalance-44qfs\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " pod="openstack/swift-ring-rebalance-44qfs"
Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.957548 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-dispersionconf\") pod \"swift-ring-rebalance-44qfs\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " pod="openstack/swift-ring-rebalance-44qfs"
Feb 02 17:30:13 crc kubenswrapper[4858]: I0202 17:30:13.966675 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpmm2\" (UniqueName: \"kubernetes.io/projected/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-kube-api-access-tpmm2\") pod \"swift-ring-rebalance-44qfs\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " pod="openstack/swift-ring-rebalance-44qfs"
Feb 02 17:30:14 crc kubenswrapper[4858]: I0202 17:30:14.189391 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-44qfs"
Feb 02 17:30:14 crc kubenswrapper[4858]: I0202 17:30:14.256443 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/703d6256-20d4-45fc-9a4c-ec6970ea250d-etc-swift\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0"
Feb 02 17:30:14 crc kubenswrapper[4858]: E0202 17:30:14.256873 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 02 17:30:14 crc kubenswrapper[4858]: E0202 17:30:14.256894 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 02 17:30:14 crc kubenswrapper[4858]: E0202 17:30:14.256958 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/703d6256-20d4-45fc-9a4c-ec6970ea250d-etc-swift podName:703d6256-20d4-45fc-9a4c-ec6970ea250d nodeName:}" failed. No retries permitted until 2026-02-02 17:30:15.256943584 +0000 UTC m=+916.409358849 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/703d6256-20d4-45fc-9a4c-ec6970ea250d-etc-swift") pod "swift-storage-0" (UID: "703d6256-20d4-45fc-9a4c-ec6970ea250d") : configmap "swift-ring-files" not found
Feb 02 17:30:14 crc kubenswrapper[4858]: W0202 17:30:14.670212 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf16bc74_b9cb_4774_b646_a4de84eb4dd9.slice/crio-4443124c5bd466c60b396eaac3d110c6ade5b451ef2de30daad877554d668091 WatchSource:0}: Error finding container 4443124c5bd466c60b396eaac3d110c6ade5b451ef2de30daad877554d668091: Status 404 returned error can't find the container with id 4443124c5bd466c60b396eaac3d110c6ade5b451ef2de30daad877554d668091
Feb 02 17:30:14 crc kubenswrapper[4858]: I0202 17:30:14.672786 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-44qfs"]
Feb 02 17:30:14 crc kubenswrapper[4858]: I0202 17:30:14.937851 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-44qfs" event={"ID":"bf16bc74-b9cb-4774-b646-a4de84eb4dd9","Type":"ContainerStarted","Data":"4443124c5bd466c60b396eaac3d110c6ade5b451ef2de30daad877554d668091"}
Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.161237 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-t8l7s"]
Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.162570 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-t8l7s"
Need to start a new one" pod="openstack/glance-db-create-t8l7s" Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.170294 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-t8l7s"] Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.274714 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac79238c-c640-41cc-b4a7-774e06727bb0-operator-scripts\") pod \"glance-db-create-t8l7s\" (UID: \"ac79238c-c640-41cc-b4a7-774e06727bb0\") " pod="openstack/glance-db-create-t8l7s" Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.274790 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkcxg\" (UniqueName: \"kubernetes.io/projected/ac79238c-c640-41cc-b4a7-774e06727bb0-kube-api-access-kkcxg\") pod \"glance-db-create-t8l7s\" (UID: \"ac79238c-c640-41cc-b4a7-774e06727bb0\") " pod="openstack/glance-db-create-t8l7s" Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.275102 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/703d6256-20d4-45fc-9a4c-ec6970ea250d-etc-swift\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:15 crc kubenswrapper[4858]: E0202 17:30:15.275366 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 17:30:15 crc kubenswrapper[4858]: E0202 17:30:15.275392 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 17:30:15 crc kubenswrapper[4858]: E0202 17:30:15.275438 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/703d6256-20d4-45fc-9a4c-ec6970ea250d-etc-swift podName:703d6256-20d4-45fc-9a4c-ec6970ea250d nodeName:}" failed. No retries permitted until 2026-02-02 17:30:17.275422458 +0000 UTC m=+918.427837733 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/703d6256-20d4-45fc-9a4c-ec6970ea250d-etc-swift") pod "swift-storage-0" (UID: "703d6256-20d4-45fc-9a4c-ec6970ea250d") : configmap "swift-ring-files" not found Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.327852 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-0edf-account-create-update-drkj7"] Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.329706 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0edf-account-create-update-drkj7" Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.332108 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.344837 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0edf-account-create-update-drkj7"] Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.377253 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8493a4cd-8a10-45f2-a063-4e0ce71de60f-operator-scripts\") pod \"glance-0edf-account-create-update-drkj7\" (UID: \"8493a4cd-8a10-45f2-a063-4e0ce71de60f\") " pod="openstack/glance-0edf-account-create-update-drkj7" Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.377370 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac79238c-c640-41cc-b4a7-774e06727bb0-operator-scripts\") pod \"glance-db-create-t8l7s\" (UID: \"ac79238c-c640-41cc-b4a7-774e06727bb0\") " pod="openstack/glance-db-create-t8l7s" Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.377451 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkcxg\" (UniqueName: \"kubernetes.io/projected/ac79238c-c640-41cc-b4a7-774e06727bb0-kube-api-access-kkcxg\") pod \"glance-db-create-t8l7s\" (UID: \"ac79238c-c640-41cc-b4a7-774e06727bb0\") " pod="openstack/glance-db-create-t8l7s" Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.377527 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4lqx\" (UniqueName: \"kubernetes.io/projected/8493a4cd-8a10-45f2-a063-4e0ce71de60f-kube-api-access-s4lqx\") pod \"glance-0edf-account-create-update-drkj7\" (UID: \"8493a4cd-8a10-45f2-a063-4e0ce71de60f\") " pod="openstack/glance-0edf-account-create-update-drkj7" Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.378564 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac79238c-c640-41cc-b4a7-774e06727bb0-operator-scripts\") pod \"glance-db-create-t8l7s\" (UID: \"ac79238c-c640-41cc-b4a7-774e06727bb0\") " pod="openstack/glance-db-create-t8l7s" Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.400325 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkcxg\" (UniqueName: \"kubernetes.io/projected/ac79238c-c640-41cc-b4a7-774e06727bb0-kube-api-access-kkcxg\") pod \"glance-db-create-t8l7s\" (UID: \"ac79238c-c640-41cc-b4a7-774e06727bb0\") " pod="openstack/glance-db-create-t8l7s" Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.479105 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4lqx\" (UniqueName: \"kubernetes.io/projected/8493a4cd-8a10-45f2-a063-4e0ce71de60f-kube-api-access-s4lqx\") pod \"glance-0edf-account-create-update-drkj7\" (UID: \"8493a4cd-8a10-45f2-a063-4e0ce71de60f\") " pod="openstack/glance-0edf-account-create-update-drkj7" Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.479252 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8493a4cd-8a10-45f2-a063-4e0ce71de60f-operator-scripts\") pod 
\"glance-0edf-account-create-update-drkj7\" (UID: \"8493a4cd-8a10-45f2-a063-4e0ce71de60f\") " pod="openstack/glance-0edf-account-create-update-drkj7" Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.480109 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8493a4cd-8a10-45f2-a063-4e0ce71de60f-operator-scripts\") pod \"glance-0edf-account-create-update-drkj7\" (UID: \"8493a4cd-8a10-45f2-a063-4e0ce71de60f\") " pod="openstack/glance-0edf-account-create-update-drkj7" Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.481519 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-t8l7s" Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.510283 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4lqx\" (UniqueName: \"kubernetes.io/projected/8493a4cd-8a10-45f2-a063-4e0ce71de60f-kube-api-access-s4lqx\") pod \"glance-0edf-account-create-update-drkj7\" (UID: \"8493a4cd-8a10-45f2-a063-4e0ce71de60f\") " pod="openstack/glance-0edf-account-create-update-drkj7" Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.657426 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0edf-account-create-update-drkj7" Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.762149 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" podUID="cdbcb8ee-7153-422f-9d1a-05e50dae3abd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.107:5353: connect: connection refused" Feb 02 17:30:15 crc kubenswrapper[4858]: I0202 17:30:15.946485 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b266-account-create-update-zmm9r" event={"ID":"431fd5ca-da9e-4493-acf7-670eb92cf3aa","Type":"ContainerStarted","Data":"d9388121faaf5987db32569578282ee74c81c1ac0cbfcf073caa9f9ba6b4ebeb"} Feb 02 17:30:16 crc kubenswrapper[4858]: W0202 17:30:16.012520 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac79238c_c640_41cc_b4a7_774e06727bb0.slice/crio-5ee0776a554b2d048d0e110d55057233037b67287794470ad95766718b94392b WatchSource:0}: Error finding container 5ee0776a554b2d048d0e110d55057233037b67287794470ad95766718b94392b: Status 404 returned error can't find the container with id 5ee0776a554b2d048d0e110d55057233037b67287794470ad95766718b94392b Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.023900 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-t8l7s"] Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.228143 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0edf-account-create-update-drkj7"] Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.754901 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.807486 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-ovsdbserver-nb\") pod \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\" (UID: \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\") " Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.807574 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdk7w\" (UniqueName: \"kubernetes.io/projected/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-kube-api-access-cdk7w\") pod \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\" (UID: \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\") " Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.807643 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-config\") pod \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\" (UID: \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\") " Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.807675 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-ovsdbserver-sb\") pod \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\" (UID: \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\") " Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.807827 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-dns-svc\") pod \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\" (UID: \"cdbcb8ee-7153-422f-9d1a-05e50dae3abd\") " Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.833155 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-kube-api-access-cdk7w" (OuterVolumeSpecName: "kube-api-access-cdk7w") pod "cdbcb8ee-7153-422f-9d1a-05e50dae3abd" (UID: "cdbcb8ee-7153-422f-9d1a-05e50dae3abd"). InnerVolumeSpecName "kube-api-access-cdk7w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.906394 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-l6jmq"] Feb 02 17:30:16 crc kubenswrapper[4858]: E0202 17:30:16.906693 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdbcb8ee-7153-422f-9d1a-05e50dae3abd" containerName="init" Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.906704 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdbcb8ee-7153-422f-9d1a-05e50dae3abd" containerName="init" Feb 02 17:30:16 crc kubenswrapper[4858]: E0202 17:30:16.906743 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdbcb8ee-7153-422f-9d1a-05e50dae3abd" containerName="dnsmasq-dns" Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.906753 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdbcb8ee-7153-422f-9d1a-05e50dae3abd" containerName="dnsmasq-dns" Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.906916 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdbcb8ee-7153-422f-9d1a-05e50dae3abd" containerName="dnsmasq-dns" Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.917213 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdk7w\" (UniqueName: \"kubernetes.io/projected/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-kube-api-access-cdk7w\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.919606 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l6jmq" Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.935367 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.962052 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l6jmq"] Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.988715 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" event={"ID":"cdbcb8ee-7153-422f-9d1a-05e50dae3abd","Type":"ContainerDied","Data":"566e5b4051061caedb24fdbf07eb3dfa71e215b71c54aa3be73f1349a6d509cb"} Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.988770 4858 scope.go:117] "RemoveContainer" containerID="fce03b4f1413ee836e9cca3010980f1e53fffedc2068457397fc5a79d1a6a0eb" Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.988883 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-lngkw" Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.998426 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cdbcb8ee-7153-422f-9d1a-05e50dae3abd" (UID: "cdbcb8ee-7153-422f-9d1a-05e50dae3abd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:16 crc kubenswrapper[4858]: I0202 17:30:16.999463 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cdbcb8ee-7153-422f-9d1a-05e50dae3abd" (UID: "cdbcb8ee-7153-422f-9d1a-05e50dae3abd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.000774 4858 generic.go:334] "Generic (PLEG): container finished" podID="757ea041-5d2d-4b24-9f27-ca5ee8116763" containerID="ee041bd0bf2041a6b27e8b3f93d821b40926a93ecd373f41251083013bc947ee" exitCode=0 Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.001295 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e76f-account-create-update-ng2zq" event={"ID":"757ea041-5d2d-4b24-9f27-ca5ee8116763","Type":"ContainerDied","Data":"ee041bd0bf2041a6b27e8b3f93d821b40926a93ecd373f41251083013bc947ee"} Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.003823 4858 generic.go:334] "Generic (PLEG): container finished" podID="5350e65d-0f27-4ac0-9251-00ce22348491" containerID="74da3e96d0b80e4d6e3fc0c1005cca38cca18cfe3abed397df82c484dfcc7f1f" exitCode=0 Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.003866 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-6qdtt" event={"ID":"5350e65d-0f27-4ac0-9251-00ce22348491","Type":"ContainerDied","Data":"74da3e96d0b80e4d6e3fc0c1005cca38cca18cfe3abed397df82c484dfcc7f1f"} Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.017124 4858 generic.go:334] "Generic (PLEG): container finished" podID="431fd5ca-da9e-4493-acf7-670eb92cf3aa" containerID="d9388121faaf5987db32569578282ee74c81c1ac0cbfcf073caa9f9ba6b4ebeb" exitCode=0 Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.017193 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b266-account-create-update-zmm9r" event={"ID":"431fd5ca-da9e-4493-acf7-670eb92cf3aa","Type":"ContainerDied","Data":"d9388121faaf5987db32569578282ee74c81c1ac0cbfcf073caa9f9ba6b4ebeb"} Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.018468 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11fbf331-6c70-487c-a70d-4ff2a12f4fb4-operator-scripts\") pod \"root-account-create-update-l6jmq\" (UID: \"11fbf331-6c70-487c-a70d-4ff2a12f4fb4\") " pod="openstack/root-account-create-update-l6jmq" Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.018529 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzrls\" (UniqueName: \"kubernetes.io/projected/11fbf331-6c70-487c-a70d-4ff2a12f4fb4-kube-api-access-qzrls\") pod \"root-account-create-update-l6jmq\" (UID: \"11fbf331-6c70-487c-a70d-4ff2a12f4fb4\") " pod="openstack/root-account-create-update-l6jmq" Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.018601 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.018615 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.023909 4858 generic.go:334] "Generic (PLEG): container finished" podID="ac79238c-c640-41cc-b4a7-774e06727bb0" containerID="c2a81f5dc1c87004feb9dc6e964a565898639418a7b102cf00fbd3515fa9bd35" exitCode=0 Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.024032 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-create-t8l7s" event={"ID":"ac79238c-c640-41cc-b4a7-774e06727bb0","Type":"ContainerDied","Data":"c2a81f5dc1c87004feb9dc6e964a565898639418a7b102cf00fbd3515fa9bd35"} Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.024057 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-t8l7s" event={"ID":"ac79238c-c640-41cc-b4a7-774e06727bb0","Type":"ContainerStarted","Data":"5ee0776a554b2d048d0e110d55057233037b67287794470ad95766718b94392b"} Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.026527 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0edf-account-create-update-drkj7" event={"ID":"8493a4cd-8a10-45f2-a063-4e0ce71de60f","Type":"ContainerStarted","Data":"3d338ed9f584c84a4f27b1d5f3c2914b693ba8ec5b1f42a48e8cadeda7e5567d"} Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.028967 4858 generic.go:334] "Generic (PLEG): container finished" podID="17d7410b-4b1f-4a80-ab62-adf84d324b21" containerID="a6dbe3f5701a69a0f031b74e22ad4f6908bfb6e1670d155273ad4679174510fb" exitCode=0 Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.029033 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ddwfk" event={"ID":"17d7410b-4b1f-4a80-ab62-adf84d324b21","Type":"ContainerDied","Data":"a6dbe3f5701a69a0f031b74e22ad4f6908bfb6e1670d155273ad4679174510fb"} Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.031079 4858 generic.go:334] "Generic (PLEG): container finished" podID="e0e82b54-f2bd-4307-bd7a-f613c0dac23c" containerID="ef2ff70d7a71512ab5695ceefb57ca6f23c8b82e7c6dfce5477a8ba990632089" exitCode=0 Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.031208 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xf4m2" event={"ID":"e0e82b54-f2bd-4307-bd7a-f613c0dac23c","Type":"ContainerDied","Data":"ef2ff70d7a71512ab5695ceefb57ca6f23c8b82e7c6dfce5477a8ba990632089"} Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.048167 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cdbcb8ee-7153-422f-9d1a-05e50dae3abd" (UID: "cdbcb8ee-7153-422f-9d1a-05e50dae3abd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.053607 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-config" (OuterVolumeSpecName: "config") pod "cdbcb8ee-7153-422f-9d1a-05e50dae3abd" (UID: "cdbcb8ee-7153-422f-9d1a-05e50dae3abd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.106274 4858 scope.go:117] "RemoveContainer" containerID="35be59ffe54209d15a5cfd3adb43a453a46e5fc6771372266735992f6d24fa15" Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.120197 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11fbf331-6c70-487c-a70d-4ff2a12f4fb4-operator-scripts\") pod \"root-account-create-update-l6jmq\" (UID: \"11fbf331-6c70-487c-a70d-4ff2a12f4fb4\") " pod="openstack/root-account-create-update-l6jmq" Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.120317 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzrls\" (UniqueName: \"kubernetes.io/projected/11fbf331-6c70-487c-a70d-4ff2a12f4fb4-kube-api-access-qzrls\") pod \"root-account-create-update-l6jmq\" (UID: \"11fbf331-6c70-487c-a70d-4ff2a12f4fb4\") " pod="openstack/root-account-create-update-l6jmq" Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.120433 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.120449 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdbcb8ee-7153-422f-9d1a-05e50dae3abd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.121904 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11fbf331-6c70-487c-a70d-4ff2a12f4fb4-operator-scripts\") pod \"root-account-create-update-l6jmq\" (UID: \"11fbf331-6c70-487c-a70d-4ff2a12f4fb4\") " pod="openstack/root-account-create-update-l6jmq" Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.137882 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzrls\" (UniqueName: \"kubernetes.io/projected/11fbf331-6c70-487c-a70d-4ff2a12f4fb4-kube-api-access-qzrls\") pod \"root-account-create-update-l6jmq\" (UID: \"11fbf331-6c70-487c-a70d-4ff2a12f4fb4\") " pod="openstack/root-account-create-update-l6jmq" Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.321538 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-lngkw"] Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.328764 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-lngkw"] Feb 02 17:30:17 crc kubenswrapper[4858]: E0202 17:30:17.341530 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 17:30:17 crc kubenswrapper[4858]: E0202 17:30:17.341584 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 17:30:17 crc kubenswrapper[4858]: E0202 17:30:17.341665 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/703d6256-20d4-45fc-9a4c-ec6970ea250d-etc-swift podName:703d6256-20d4-45fc-9a4c-ec6970ea250d nodeName:}" failed. No retries permitted until 2026-02-02 17:30:21.34164656 +0000 UTC m=+922.494061825 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/703d6256-20d4-45fc-9a4c-ec6970ea250d-etc-swift") pod "swift-storage-0" (UID: "703d6256-20d4-45fc-9a4c-ec6970ea250d") : configmap "swift-ring-files" not found Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.341369 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/703d6256-20d4-45fc-9a4c-ec6970ea250d-etc-swift\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:17 crc kubenswrapper[4858]: I0202 17:30:17.400931 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l6jmq" Feb 02 17:30:18 crc kubenswrapper[4858]: I0202 17:30:18.047248 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-6qdtt" event={"ID":"5350e65d-0f27-4ac0-9251-00ce22348491","Type":"ContainerStarted","Data":"e0d357ff68dfc0c048c53fbd7b229d228bcd0d28988ef48b8396526b0fe205bc"} Feb 02 17:30:18 crc kubenswrapper[4858]: I0202 17:30:18.047589 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-6qdtt" Feb 02 17:30:18 crc kubenswrapper[4858]: I0202 17:30:18.049774 4858 generic.go:334] "Generic (PLEG): container finished" podID="8493a4cd-8a10-45f2-a063-4e0ce71de60f" containerID="44c21ee6f8c701a9fe9b1acd0e84567e27b279fc2646e15b35a2280ace539874" exitCode=0 Feb 02 17:30:18 crc kubenswrapper[4858]: I0202 17:30:18.049892 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0edf-account-create-update-drkj7" event={"ID":"8493a4cd-8a10-45f2-a063-4e0ce71de60f","Type":"ContainerDied","Data":"44c21ee6f8c701a9fe9b1acd0e84567e27b279fc2646e15b35a2280ace539874"} Feb 02 17:30:18 crc kubenswrapper[4858]: I0202 17:30:18.083191 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-6qdtt" podStartSLOduration=6.083166346 podStartE2EDuration="6.083166346s" podCreationTimestamp="2026-02-02 17:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:30:18.077127676 +0000 UTC m=+919.229542951" watchObservedRunningTime="2026-02-02 17:30:18.083166346 +0000 UTC m=+919.235581611" Feb 02 17:30:18 crc kubenswrapper[4858]: I0202 17:30:18.430546 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdbcb8ee-7153-422f-9d1a-05e50dae3abd" path="/var/lib/kubelet/pods/cdbcb8ee-7153-422f-9d1a-05e50dae3abd/volumes" Feb 02 17:30:18 crc kubenswrapper[4858]: I0202 17:30:18.615411 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-ddwfk" Feb 02 17:30:18 crc kubenswrapper[4858]: I0202 17:30:18.773069 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkjv6\" (UniqueName: \"kubernetes.io/projected/17d7410b-4b1f-4a80-ab62-adf84d324b21-kube-api-access-jkjv6\") pod \"17d7410b-4b1f-4a80-ab62-adf84d324b21\" (UID: \"17d7410b-4b1f-4a80-ab62-adf84d324b21\") " Feb 02 17:30:18 crc kubenswrapper[4858]: I0202 17:30:18.773200 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17d7410b-4b1f-4a80-ab62-adf84d324b21-operator-scripts\") pod \"17d7410b-4b1f-4a80-ab62-adf84d324b21\" (UID: \"17d7410b-4b1f-4a80-ab62-adf84d324b21\") " Feb 02 17:30:18 crc kubenswrapper[4858]: I0202 17:30:18.773793 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17d7410b-4b1f-4a80-ab62-adf84d324b21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "17d7410b-4b1f-4a80-ab62-adf84d324b21" (UID: "17d7410b-4b1f-4a80-ab62-adf84d324b21"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:18 crc kubenswrapper[4858]: I0202 17:30:18.774057 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17d7410b-4b1f-4a80-ab62-adf84d324b21-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:18 crc kubenswrapper[4858]: I0202 17:30:18.778534 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17d7410b-4b1f-4a80-ab62-adf84d324b21-kube-api-access-jkjv6" (OuterVolumeSpecName: "kube-api-access-jkjv6") pod "17d7410b-4b1f-4a80-ab62-adf84d324b21" (UID: "17d7410b-4b1f-4a80-ab62-adf84d324b21"). InnerVolumeSpecName "kube-api-access-jkjv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:30:18 crc kubenswrapper[4858]: I0202 17:30:18.878308 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkjv6\" (UniqueName: \"kubernetes.io/projected/17d7410b-4b1f-4a80-ab62-adf84d324b21-kube-api-access-jkjv6\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:19 crc kubenswrapper[4858]: I0202 17:30:19.061359 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ddwfk" event={"ID":"17d7410b-4b1f-4a80-ab62-adf84d324b21","Type":"ContainerDied","Data":"573a6d043e81d0f4df938492d51f1f55cbbbd843e49553867044e5a8dbec3120"} Feb 02 17:30:19 crc kubenswrapper[4858]: I0202 17:30:19.061467 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="573a6d043e81d0f4df938492d51f1f55cbbbd843e49553867044e5a8dbec3120" Feb 02 17:30:19 crc kubenswrapper[4858]: I0202 17:30:19.061512 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ddwfk" Feb 02 17:30:19 crc kubenswrapper[4858]: I0202 17:30:19.856619 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-xf4m2" Feb 02 17:30:19 crc kubenswrapper[4858]: I0202 17:30:19.857959 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-t8l7s" Feb 02 17:30:19 crc kubenswrapper[4858]: I0202 17:30:19.871447 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0edf-account-create-update-drkj7" Feb 02 17:30:19 crc kubenswrapper[4858]: I0202 17:30:19.911339 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b266-account-create-update-zmm9r" Feb 02 17:30:19 crc kubenswrapper[4858]: I0202 17:30:19.913100 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e76f-account-create-update-ng2zq" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.000371 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/757ea041-5d2d-4b24-9f27-ca5ee8116763-operator-scripts\") pod \"757ea041-5d2d-4b24-9f27-ca5ee8116763\" (UID: \"757ea041-5d2d-4b24-9f27-ca5ee8116763\") " Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.000941 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8493a4cd-8a10-45f2-a063-4e0ce71de60f-operator-scripts\") pod \"8493a4cd-8a10-45f2-a063-4e0ce71de60f\" (UID: \"8493a4cd-8a10-45f2-a063-4e0ce71de60f\") " Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.001084 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac79238c-c640-41cc-b4a7-774e06727bb0-operator-scripts\") pod \"ac79238c-c640-41cc-b4a7-774e06727bb0\" (UID: \"ac79238c-c640-41cc-b4a7-774e06727bb0\") " Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.001122 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/431fd5ca-da9e-4493-acf7-670eb92cf3aa-operator-scripts\") pod \"431fd5ca-da9e-4493-acf7-670eb92cf3aa\" (UID: \"431fd5ca-da9e-4493-acf7-670eb92cf3aa\") " Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.001310 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkcxg\" (UniqueName: \"kubernetes.io/projected/ac79238c-c640-41cc-b4a7-774e06727bb0-kube-api-access-kkcxg\") pod \"ac79238c-c640-41cc-b4a7-774e06727bb0\" (UID: \"ac79238c-c640-41cc-b4a7-774e06727bb0\") " Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.001358 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e82b54-f2bd-4307-bd7a-f613c0dac23c-operator-scripts\") pod \"e0e82b54-f2bd-4307-bd7a-f613c0dac23c\" (UID: \"e0e82b54-f2bd-4307-bd7a-f613c0dac23c\") " Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.001381 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4lqx\" (UniqueName: \"kubernetes.io/projected/8493a4cd-8a10-45f2-a063-4e0ce71de60f-kube-api-access-s4lqx\") pod \"8493a4cd-8a10-45f2-a063-4e0ce71de60f\" (UID: \"8493a4cd-8a10-45f2-a063-4e0ce71de60f\") " Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.001423 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwq4k\" (UniqueName: \"kubernetes.io/projected/757ea041-5d2d-4b24-9f27-ca5ee8116763-kube-api-access-bwq4k\") pod \"757ea041-5d2d-4b24-9f27-ca5ee8116763\" (UID: \"757ea041-5d2d-4b24-9f27-ca5ee8116763\") " Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.001531 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/757ea041-5d2d-4b24-9f27-ca5ee8116763-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "757ea041-5d2d-4b24-9f27-ca5ee8116763" (UID: "757ea041-5d2d-4b24-9f27-ca5ee8116763"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.001542 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8493a4cd-8a10-45f2-a063-4e0ce71de60f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8493a4cd-8a10-45f2-a063-4e0ce71de60f" (UID: "8493a4cd-8a10-45f2-a063-4e0ce71de60f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.001568 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrzzt\" (UniqueName: \"kubernetes.io/projected/e0e82b54-f2bd-4307-bd7a-f613c0dac23c-kube-api-access-lrzzt\") pod \"e0e82b54-f2bd-4307-bd7a-f613c0dac23c\" (UID: \"e0e82b54-f2bd-4307-bd7a-f613c0dac23c\") " Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.001694 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7ljx\" (UniqueName: \"kubernetes.io/projected/431fd5ca-da9e-4493-acf7-670eb92cf3aa-kube-api-access-c7ljx\") pod \"431fd5ca-da9e-4493-acf7-670eb92cf3aa\" (UID: \"431fd5ca-da9e-4493-acf7-670eb92cf3aa\") " Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.002457 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac79238c-c640-41cc-b4a7-774e06727bb0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac79238c-c640-41cc-b4a7-774e06727bb0" (UID: "ac79238c-c640-41cc-b4a7-774e06727bb0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.003114 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/757ea041-5d2d-4b24-9f27-ca5ee8116763-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.003145 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8493a4cd-8a10-45f2-a063-4e0ce71de60f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.003089 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0e82b54-f2bd-4307-bd7a-f613c0dac23c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0e82b54-f2bd-4307-bd7a-f613c0dac23c" (UID: "e0e82b54-f2bd-4307-bd7a-f613c0dac23c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.003393 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/431fd5ca-da9e-4493-acf7-670eb92cf3aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "431fd5ca-da9e-4493-acf7-670eb92cf3aa" (UID: "431fd5ca-da9e-4493-acf7-670eb92cf3aa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.006413 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8493a4cd-8a10-45f2-a063-4e0ce71de60f-kube-api-access-s4lqx" (OuterVolumeSpecName: "kube-api-access-s4lqx") pod "8493a4cd-8a10-45f2-a063-4e0ce71de60f" (UID: "8493a4cd-8a10-45f2-a063-4e0ce71de60f"). InnerVolumeSpecName "kube-api-access-s4lqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.006800 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0e82b54-f2bd-4307-bd7a-f613c0dac23c-kube-api-access-lrzzt" (OuterVolumeSpecName: "kube-api-access-lrzzt") pod "e0e82b54-f2bd-4307-bd7a-f613c0dac23c" (UID: "e0e82b54-f2bd-4307-bd7a-f613c0dac23c"). InnerVolumeSpecName "kube-api-access-lrzzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.006861 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac79238c-c640-41cc-b4a7-774e06727bb0-kube-api-access-kkcxg" (OuterVolumeSpecName: "kube-api-access-kkcxg") pod "ac79238c-c640-41cc-b4a7-774e06727bb0" (UID: "ac79238c-c640-41cc-b4a7-774e06727bb0"). InnerVolumeSpecName "kube-api-access-kkcxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.007308 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/431fd5ca-da9e-4493-acf7-670eb92cf3aa-kube-api-access-c7ljx" (OuterVolumeSpecName: "kube-api-access-c7ljx") pod "431fd5ca-da9e-4493-acf7-670eb92cf3aa" (UID: "431fd5ca-da9e-4493-acf7-670eb92cf3aa"). InnerVolumeSpecName "kube-api-access-c7ljx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.008171 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/757ea041-5d2d-4b24-9f27-ca5ee8116763-kube-api-access-bwq4k" (OuterVolumeSpecName: "kube-api-access-bwq4k") pod "757ea041-5d2d-4b24-9f27-ca5ee8116763" (UID: "757ea041-5d2d-4b24-9f27-ca5ee8116763"). InnerVolumeSpecName "kube-api-access-bwq4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.071666 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-t8l7s" event={"ID":"ac79238c-c640-41cc-b4a7-774e06727bb0","Type":"ContainerDied","Data":"5ee0776a554b2d048d0e110d55057233037b67287794470ad95766718b94392b"} Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.071724 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ee0776a554b2d048d0e110d55057233037b67287794470ad95766718b94392b" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.071782 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-t8l7s" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.075925 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-44qfs" event={"ID":"bf16bc74-b9cb-4774-b646-a4de84eb4dd9","Type":"ContainerStarted","Data":"3d4d3d32304418e1e3968d3521f38461dbbf07880d5b66b31813ef7197ef094b"} Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.078369 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0edf-account-create-update-drkj7" event={"ID":"8493a4cd-8a10-45f2-a063-4e0ce71de60f","Type":"ContainerDied","Data":"3d338ed9f584c84a4f27b1d5f3c2914b693ba8ec5b1f42a48e8cadeda7e5567d"} Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.078418 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d338ed9f584c84a4f27b1d5f3c2914b693ba8ec5b1f42a48e8cadeda7e5567d" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.078395 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0edf-account-create-update-drkj7" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.080137 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-xf4m2" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.080171 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xf4m2" event={"ID":"e0e82b54-f2bd-4307-bd7a-f613c0dac23c","Type":"ContainerDied","Data":"09497d4f12803acb834a8be063a8e8d7e9b56c851a64ffaaf8a973c645247352"} Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.080238 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09497d4f12803acb834a8be063a8e8d7e9b56c851a64ffaaf8a973c645247352" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.081634 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e76f-account-create-update-ng2zq" event={"ID":"757ea041-5d2d-4b24-9f27-ca5ee8116763","Type":"ContainerDied","Data":"1f1e00b765611cf7924c3d42f799490b54cf2d95d9a0e4a18ad9e852301d76b8"} Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.081659 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f1e00b765611cf7924c3d42f799490b54cf2d95d9a0e4a18ad9e852301d76b8" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.082240 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e76f-account-create-update-ng2zq" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.086542 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b266-account-create-update-zmm9r" event={"ID":"431fd5ca-da9e-4493-acf7-670eb92cf3aa","Type":"ContainerDied","Data":"7a83d87fb0f231ead300caf2d310b555b07a2e5af27d6f9dd87312b717e23ebe"} Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.086602 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a83d87fb0f231ead300caf2d310b555b07a2e5af27d6f9dd87312b717e23ebe" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.086653 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b266-account-create-update-zmm9r" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.104346 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac79238c-c640-41cc-b4a7-774e06727bb0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.104371 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/431fd5ca-da9e-4493-acf7-670eb92cf3aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.104383 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkcxg\" (UniqueName: \"kubernetes.io/projected/ac79238c-c640-41cc-b4a7-774e06727bb0-kube-api-access-kkcxg\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.104395 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e82b54-f2bd-4307-bd7a-f613c0dac23c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.104405 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4lqx\" (UniqueName: \"kubernetes.io/projected/8493a4cd-8a10-45f2-a063-4e0ce71de60f-kube-api-access-s4lqx\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.104416 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwq4k\" (UniqueName: \"kubernetes.io/projected/757ea041-5d2d-4b24-9f27-ca5ee8116763-kube-api-access-bwq4k\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.104425 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrzzt\" (UniqueName: \"kubernetes.io/projected/e0e82b54-f2bd-4307-bd7a-f613c0dac23c-kube-api-access-lrzzt\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.104434 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7ljx\" (UniqueName: \"kubernetes.io/projected/431fd5ca-da9e-4493-acf7-670eb92cf3aa-kube-api-access-c7ljx\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.112605 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-44qfs" podStartSLOduration=2.104392176 podStartE2EDuration="7.112585019s" podCreationTimestamp="2026-02-02 17:30:13 +0000 UTC" firstStartedPulling="2026-02-02 17:30:14.674958679 +0000 UTC m=+915.827373944" lastFinishedPulling="2026-02-02 17:30:19.683151532 +0000 UTC m=+920.835566787" observedRunningTime="2026-02-02 17:30:20.095630391 +0000 UTC m=+921.248045656" watchObservedRunningTime="2026-02-02 17:30:20.112585019 +0000 UTC m=+921.265000284" Feb 02 17:30:20 crc kubenswrapper[4858]: I0202 17:30:20.155795 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l6jmq"] Feb 02 17:30:21 crc kubenswrapper[4858]: I0202 17:30:21.095693 4858 generic.go:334] "Generic (PLEG): container finished" podID="11fbf331-6c70-487c-a70d-4ff2a12f4fb4" containerID="26f2c9ac1610ea82ece37363db8e837c1be599e607cc975e2fb1ff8dba6ae0f5" exitCode=0 Feb 02 17:30:21 crc kubenswrapper[4858]: I0202 17:30:21.097224 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l6jmq" 
event={"ID":"11fbf331-6c70-487c-a70d-4ff2a12f4fb4","Type":"ContainerDied","Data":"26f2c9ac1610ea82ece37363db8e837c1be599e607cc975e2fb1ff8dba6ae0f5"} Feb 02 17:30:21 crc kubenswrapper[4858]: I0202 17:30:21.097251 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l6jmq" event={"ID":"11fbf331-6c70-487c-a70d-4ff2a12f4fb4","Type":"ContainerStarted","Data":"adc6aeab3d993062f96f178efb2740229bfab1f9024a97dd2fa9e415ad864914"} Feb 02 17:30:21 crc kubenswrapper[4858]: I0202 17:30:21.424686 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/703d6256-20d4-45fc-9a4c-ec6970ea250d-etc-swift\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:21 crc kubenswrapper[4858]: E0202 17:30:21.425727 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 17:30:21 crc kubenswrapper[4858]: E0202 17:30:21.425766 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 17:30:21 crc kubenswrapper[4858]: E0202 17:30:21.425822 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/703d6256-20d4-45fc-9a4c-ec6970ea250d-etc-swift podName:703d6256-20d4-45fc-9a4c-ec6970ea250d nodeName:}" failed. No retries permitted until 2026-02-02 17:30:29.425801813 +0000 UTC m=+930.578217078 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/703d6256-20d4-45fc-9a4c-ec6970ea250d-etc-swift") pod "swift-storage-0" (UID: "703d6256-20d4-45fc-9a4c-ec6970ea250d") : configmap "swift-ring-files" not found Feb 02 17:30:22 crc kubenswrapper[4858]: I0202 17:30:22.592399 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l6jmq" Feb 02 17:30:22 crc kubenswrapper[4858]: I0202 17:30:22.749170 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11fbf331-6c70-487c-a70d-4ff2a12f4fb4-operator-scripts\") pod \"11fbf331-6c70-487c-a70d-4ff2a12f4fb4\" (UID: \"11fbf331-6c70-487c-a70d-4ff2a12f4fb4\") " Feb 02 17:30:22 crc kubenswrapper[4858]: I0202 17:30:22.749626 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzrls\" (UniqueName: \"kubernetes.io/projected/11fbf331-6c70-487c-a70d-4ff2a12f4fb4-kube-api-access-qzrls\") pod \"11fbf331-6c70-487c-a70d-4ff2a12f4fb4\" (UID: \"11fbf331-6c70-487c-a70d-4ff2a12f4fb4\") " Feb 02 17:30:22 crc kubenswrapper[4858]: I0202 17:30:22.750713 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11fbf331-6c70-487c-a70d-4ff2a12f4fb4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "11fbf331-6c70-487c-a70d-4ff2a12f4fb4" (UID: "11fbf331-6c70-487c-a70d-4ff2a12f4fb4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:22 crc kubenswrapper[4858]: I0202 17:30:22.756474 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11fbf331-6c70-487c-a70d-4ff2a12f4fb4-kube-api-access-qzrls" (OuterVolumeSpecName: "kube-api-access-qzrls") pod "11fbf331-6c70-487c-a70d-4ff2a12f4fb4" (UID: "11fbf331-6c70-487c-a70d-4ff2a12f4fb4"). InnerVolumeSpecName "kube-api-access-qzrls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:30:22 crc kubenswrapper[4858]: I0202 17:30:22.777106 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-6qdtt" Feb 02 17:30:22 crc kubenswrapper[4858]: I0202 17:30:22.849202 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4sv2z"] Feb 02 17:30:22 crc kubenswrapper[4858]: I0202 17:30:22.849450 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" podUID="8094eb3b-7f98-407b-8e5d-551ef561716b" containerName="dnsmasq-dns" containerID="cri-o://baf2e236d5af896009e59c4cc2c80a1b43db2102b7512cffa0b4173555175a20" gracePeriod=10 Feb 02 17:30:22 crc kubenswrapper[4858]: I0202 17:30:22.851815 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzrls\" (UniqueName: \"kubernetes.io/projected/11fbf331-6c70-487c-a70d-4ff2a12f4fb4-kube-api-access-qzrls\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:22 crc kubenswrapper[4858]: I0202 17:30:22.851856 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11fbf331-6c70-487c-a70d-4ff2a12f4fb4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:23 crc kubenswrapper[4858]: I0202 17:30:23.114932 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l6jmq" event={"ID":"11fbf331-6c70-487c-a70d-4ff2a12f4fb4","Type":"ContainerDied","Data":"adc6aeab3d993062f96f178efb2740229bfab1f9024a97dd2fa9e415ad864914"} Feb 02 17:30:23 crc kubenswrapper[4858]: I0202 17:30:23.115300 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adc6aeab3d993062f96f178efb2740229bfab1f9024a97dd2fa9e415ad864914" Feb 02 17:30:23 crc kubenswrapper[4858]: I0202 17:30:23.115031 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l6jmq" Feb 02 17:30:23 crc kubenswrapper[4858]: I0202 17:30:23.116671 4858 generic.go:334] "Generic (PLEG): container finished" podID="8094eb3b-7f98-407b-8e5d-551ef561716b" containerID="baf2e236d5af896009e59c4cc2c80a1b43db2102b7512cffa0b4173555175a20" exitCode=0 Feb 02 17:30:23 crc kubenswrapper[4858]: I0202 17:30:23.116718 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" event={"ID":"8094eb3b-7f98-407b-8e5d-551ef561716b","Type":"ContainerDied","Data":"baf2e236d5af896009e59c4cc2c80a1b43db2102b7512cffa0b4173555175a20"} Feb 02 17:30:23 crc kubenswrapper[4858]: I0202 17:30:23.287307 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" Feb 02 17:30:23 crc kubenswrapper[4858]: I0202 17:30:23.366947 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8094eb3b-7f98-407b-8e5d-551ef561716b-config\") pod \"8094eb3b-7f98-407b-8e5d-551ef561716b\" (UID: \"8094eb3b-7f98-407b-8e5d-551ef561716b\") " Feb 02 17:30:23 crc kubenswrapper[4858]: I0202 17:30:23.367231 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g6bk\" (UniqueName: \"kubernetes.io/projected/8094eb3b-7f98-407b-8e5d-551ef561716b-kube-api-access-9g6bk\") pod \"8094eb3b-7f98-407b-8e5d-551ef561716b\" (UID: \"8094eb3b-7f98-407b-8e5d-551ef561716b\") " Feb 02 17:30:23 crc kubenswrapper[4858]: I0202 17:30:23.367385 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8094eb3b-7f98-407b-8e5d-551ef561716b-dns-svc\") pod \"8094eb3b-7f98-407b-8e5d-551ef561716b\" (UID: \"8094eb3b-7f98-407b-8e5d-551ef561716b\") " Feb 02 17:30:23 crc kubenswrapper[4858]: I0202 17:30:23.372345 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8094eb3b-7f98-407b-8e5d-551ef561716b-kube-api-access-9g6bk" (OuterVolumeSpecName: "kube-api-access-9g6bk") pod "8094eb3b-7f98-407b-8e5d-551ef561716b" (UID: "8094eb3b-7f98-407b-8e5d-551ef561716b"). InnerVolumeSpecName "kube-api-access-9g6bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:30:23 crc kubenswrapper[4858]: I0202 17:30:23.412052 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8094eb3b-7f98-407b-8e5d-551ef561716b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8094eb3b-7f98-407b-8e5d-551ef561716b" (UID: "8094eb3b-7f98-407b-8e5d-551ef561716b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:23 crc kubenswrapper[4858]: I0202 17:30:23.418386 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8094eb3b-7f98-407b-8e5d-551ef561716b-config" (OuterVolumeSpecName: "config") pod "8094eb3b-7f98-407b-8e5d-551ef561716b" (UID: "8094eb3b-7f98-407b-8e5d-551ef561716b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:23 crc kubenswrapper[4858]: I0202 17:30:23.470256 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8094eb3b-7f98-407b-8e5d-551ef561716b-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:23 crc kubenswrapper[4858]: I0202 17:30:23.470314 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g6bk\" (UniqueName: \"kubernetes.io/projected/8094eb3b-7f98-407b-8e5d-551ef561716b-kube-api-access-9g6bk\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:23 crc kubenswrapper[4858]: I0202 17:30:23.470335 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8094eb3b-7f98-407b-8e5d-551ef561716b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:24 crc kubenswrapper[4858]: I0202 17:30:24.138213 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" event={"ID":"8094eb3b-7f98-407b-8e5d-551ef561716b","Type":"ContainerDied","Data":"a8fee14c5d8b94b0131e0c5d32de892bf4a6af442d4ce6727e25a3ba76f4bab9"} Feb 02 17:30:24 crc kubenswrapper[4858]: I0202 17:30:24.138587 4858 scope.go:117] "RemoveContainer" containerID="baf2e236d5af896009e59c4cc2c80a1b43db2102b7512cffa0b4173555175a20" Feb 02 17:30:24 crc kubenswrapper[4858]: I0202 17:30:24.138325 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4sv2z" Feb 02 17:30:24 crc kubenswrapper[4858]: I0202 17:30:24.161198 4858 scope.go:117] "RemoveContainer" containerID="00e7d4d295a15320f7119126c7d643a79c58e0e6f2ea7a939217061106c47abb" Feb 02 17:30:24 crc kubenswrapper[4858]: I0202 17:30:24.178772 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4sv2z"] Feb 02 17:30:24 crc kubenswrapper[4858]: I0202 17:30:24.187396 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4sv2z"] Feb 02 17:30:24 crc kubenswrapper[4858]: I0202 17:30:24.412629 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8094eb3b-7f98-407b-8e5d-551ef561716b" path="/var/lib/kubelet/pods/8094eb3b-7f98-407b-8e5d-551ef561716b/volumes" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.458589 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-cv9mb"] Feb 02 17:30:25 crc kubenswrapper[4858]: E0202 17:30:25.459241 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11fbf331-6c70-487c-a70d-4ff2a12f4fb4" containerName="mariadb-account-create-update" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.459263 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="11fbf331-6c70-487c-a70d-4ff2a12f4fb4" containerName="mariadb-account-create-update" Feb 02 17:30:25 crc kubenswrapper[4858]: E0202 17:30:25.459277 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="431fd5ca-da9e-4493-acf7-670eb92cf3aa" containerName="mariadb-account-create-update" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.459286 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="431fd5ca-da9e-4493-acf7-670eb92cf3aa" containerName="mariadb-account-create-update" Feb 02 17:30:25 crc kubenswrapper[4858]: E0202 17:30:25.459297 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e82b54-f2bd-4307-bd7a-f613c0dac23c" containerName="mariadb-database-create" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 
17:30:25.459304 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e82b54-f2bd-4307-bd7a-f613c0dac23c" containerName="mariadb-database-create" Feb 02 17:30:25 crc kubenswrapper[4858]: E0202 17:30:25.459324 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8094eb3b-7f98-407b-8e5d-551ef561716b" containerName="init" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.459332 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8094eb3b-7f98-407b-8e5d-551ef561716b" containerName="init" Feb 02 17:30:25 crc kubenswrapper[4858]: E0202 17:30:25.459347 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac79238c-c640-41cc-b4a7-774e06727bb0" containerName="mariadb-database-create" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.459356 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac79238c-c640-41cc-b4a7-774e06727bb0" containerName="mariadb-database-create" Feb 02 17:30:25 crc kubenswrapper[4858]: E0202 17:30:25.459379 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8094eb3b-7f98-407b-8e5d-551ef561716b" containerName="dnsmasq-dns" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.459386 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8094eb3b-7f98-407b-8e5d-551ef561716b" containerName="dnsmasq-dns" Feb 02 17:30:25 crc kubenswrapper[4858]: E0202 17:30:25.459401 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8493a4cd-8a10-45f2-a063-4e0ce71de60f" containerName="mariadb-account-create-update" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.459409 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8493a4cd-8a10-45f2-a063-4e0ce71de60f" containerName="mariadb-account-create-update" Feb 02 17:30:25 crc kubenswrapper[4858]: E0202 17:30:25.459424 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="757ea041-5d2d-4b24-9f27-ca5ee8116763" containerName="mariadb-account-create-update" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.459431 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="757ea041-5d2d-4b24-9f27-ca5ee8116763" containerName="mariadb-account-create-update" Feb 02 17:30:25 crc kubenswrapper[4858]: E0202 17:30:25.459448 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17d7410b-4b1f-4a80-ab62-adf84d324b21" containerName="mariadb-database-create" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.459457 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d7410b-4b1f-4a80-ab62-adf84d324b21" containerName="mariadb-database-create" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.459629 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="17d7410b-4b1f-4a80-ab62-adf84d324b21" containerName="mariadb-database-create" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.459649 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="11fbf331-6c70-487c-a70d-4ff2a12f4fb4" containerName="mariadb-account-create-update" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.459667 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac79238c-c640-41cc-b4a7-774e06727bb0" containerName="mariadb-database-create" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.459680 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0e82b54-f2bd-4307-bd7a-f613c0dac23c" containerName="mariadb-database-create" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.459694 4858 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="8094eb3b-7f98-407b-8e5d-551ef561716b" containerName="dnsmasq-dns" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.459704 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8493a4cd-8a10-45f2-a063-4e0ce71de60f" containerName="mariadb-account-create-update" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.459715 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="431fd5ca-da9e-4493-acf7-670eb92cf3aa" containerName="mariadb-account-create-update" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.459725 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="757ea041-5d2d-4b24-9f27-ca5ee8116763" containerName="mariadb-account-create-update" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.460381 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cv9mb" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.462574 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dfr7p" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.463436 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.473712 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cv9mb"] Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.607132 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb04b10-2484-4c41-903c-d12ea9ab3600-combined-ca-bundle\") pod \"glance-db-sync-cv9mb\" (UID: \"5cb04b10-2484-4c41-903c-d12ea9ab3600\") " pod="openstack/glance-db-sync-cv9mb" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.607454 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb04b10-2484-4c41-903c-d12ea9ab3600-config-data\") pod \"glance-db-sync-cv9mb\" (UID: \"5cb04b10-2484-4c41-903c-d12ea9ab3600\") " pod="openstack/glance-db-sync-cv9mb" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.607603 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6krd\" (UniqueName: \"kubernetes.io/projected/5cb04b10-2484-4c41-903c-d12ea9ab3600-kube-api-access-c6krd\") pod \"glance-db-sync-cv9mb\" (UID: \"5cb04b10-2484-4c41-903c-d12ea9ab3600\") " pod="openstack/glance-db-sync-cv9mb" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.607756 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5cb04b10-2484-4c41-903c-d12ea9ab3600-db-sync-config-data\") pod \"glance-db-sync-cv9mb\" (UID: \"5cb04b10-2484-4c41-903c-d12ea9ab3600\") " pod="openstack/glance-db-sync-cv9mb" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.709895 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb04b10-2484-4c41-903c-d12ea9ab3600-combined-ca-bundle\") pod \"glance-db-sync-cv9mb\" (UID: \"5cb04b10-2484-4c41-903c-d12ea9ab3600\") " pod="openstack/glance-db-sync-cv9mb" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.709963 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5cb04b10-2484-4c41-903c-d12ea9ab3600-config-data\") pod \"glance-db-sync-cv9mb\" (UID: \"5cb04b10-2484-4c41-903c-d12ea9ab3600\") " pod="openstack/glance-db-sync-cv9mb" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.710004 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6krd\" (UniqueName: \"kubernetes.io/projected/5cb04b10-2484-4c41-903c-d12ea9ab3600-kube-api-access-c6krd\") pod \"glance-db-sync-cv9mb\" (UID: \"5cb04b10-2484-4c41-903c-d12ea9ab3600\") " pod="openstack/glance-db-sync-cv9mb" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.710044 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5cb04b10-2484-4c41-903c-d12ea9ab3600-db-sync-config-data\") pod \"glance-db-sync-cv9mb\" (UID: \"5cb04b10-2484-4c41-903c-d12ea9ab3600\") " pod="openstack/glance-db-sync-cv9mb" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.718659 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5cb04b10-2484-4c41-903c-d12ea9ab3600-db-sync-config-data\") pod \"glance-db-sync-cv9mb\" (UID: \"5cb04b10-2484-4c41-903c-d12ea9ab3600\") " pod="openstack/glance-db-sync-cv9mb" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.732204 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb04b10-2484-4c41-903c-d12ea9ab3600-combined-ca-bundle\") pod \"glance-db-sync-cv9mb\" (UID: \"5cb04b10-2484-4c41-903c-d12ea9ab3600\") " pod="openstack/glance-db-sync-cv9mb" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.732554 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb04b10-2484-4c41-903c-d12ea9ab3600-config-data\") pod \"glance-db-sync-cv9mb\" (UID: \"5cb04b10-2484-4c41-903c-d12ea9ab3600\") " pod="openstack/glance-db-sync-cv9mb" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.741341 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6krd\" (UniqueName: \"kubernetes.io/projected/5cb04b10-2484-4c41-903c-d12ea9ab3600-kube-api-access-c6krd\") pod \"glance-db-sync-cv9mb\" (UID: \"5cb04b10-2484-4c41-903c-d12ea9ab3600\") " pod="openstack/glance-db-sync-cv9mb" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.758562 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 02 17:30:25 crc kubenswrapper[4858]: I0202 17:30:25.822507 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-cv9mb" Feb 02 17:30:26 crc kubenswrapper[4858]: I0202 17:30:26.368231 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cv9mb"] Feb 02 17:30:27 crc kubenswrapper[4858]: I0202 17:30:27.166532 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cv9mb" event={"ID":"5cb04b10-2484-4c41-903c-d12ea9ab3600","Type":"ContainerStarted","Data":"de14d061bf892d7dbc63ba93e047bb51d209b2fa0235f5e0ba4618142a2cc9d7"} Feb 02 17:30:27 crc kubenswrapper[4858]: I0202 17:30:27.168697 4858 generic.go:334] "Generic (PLEG): container finished" podID="bf16bc74-b9cb-4774-b646-a4de84eb4dd9" containerID="3d4d3d32304418e1e3968d3521f38461dbbf07880d5b66b31813ef7197ef094b" exitCode=0 Feb 02 17:30:27 crc kubenswrapper[4858]: I0202 17:30:27.168747 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-44qfs" event={"ID":"bf16bc74-b9cb-4774-b646-a4de84eb4dd9","Type":"ContainerDied","Data":"3d4d3d32304418e1e3968d3521f38461dbbf07880d5b66b31813ef7197ef094b"} Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.120655 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-l6jmq"] Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.128558 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-l6jmq"] Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.412355 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11fbf331-6c70-487c-a70d-4ff2a12f4fb4" path="/var/lib/kubelet/pods/11fbf331-6c70-487c-a70d-4ff2a12f4fb4/volumes" Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.676038 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-44qfs" Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.764674 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpmm2\" (UniqueName: \"kubernetes.io/projected/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-kube-api-access-tpmm2\") pod \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.765241 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-scripts\") pod \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.765280 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-swiftconf\") pod \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.765424 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-ring-data-devices\") pod \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.765454 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-combined-ca-bundle\") pod \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.765500 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-dispersionconf\") pod \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.765550 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-etc-swift\") pod \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\" (UID: \"bf16bc74-b9cb-4774-b646-a4de84eb4dd9\") " Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.766612 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "bf16bc74-b9cb-4774-b646-a4de84eb4dd9" (UID: "bf16bc74-b9cb-4774-b646-a4de84eb4dd9"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.766827 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bf16bc74-b9cb-4774-b646-a4de84eb4dd9" (UID: "bf16bc74-b9cb-4774-b646-a4de84eb4dd9"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.773182 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-kube-api-access-tpmm2" (OuterVolumeSpecName: "kube-api-access-tpmm2") pod "bf16bc74-b9cb-4774-b646-a4de84eb4dd9" (UID: "bf16bc74-b9cb-4774-b646-a4de84eb4dd9"). InnerVolumeSpecName "kube-api-access-tpmm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.775736 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "bf16bc74-b9cb-4774-b646-a4de84eb4dd9" (UID: "bf16bc74-b9cb-4774-b646-a4de84eb4dd9"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.791815 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-scripts" (OuterVolumeSpecName: "scripts") pod "bf16bc74-b9cb-4774-b646-a4de84eb4dd9" (UID: "bf16bc74-b9cb-4774-b646-a4de84eb4dd9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.795754 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf16bc74-b9cb-4774-b646-a4de84eb4dd9" (UID: "bf16bc74-b9cb-4774-b646-a4de84eb4dd9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.798236 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "bf16bc74-b9cb-4774-b646-a4de84eb4dd9" (UID: "bf16bc74-b9cb-4774-b646-a4de84eb4dd9"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.868312 4858 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.868350 4858 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.868360 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpmm2\" (UniqueName: \"kubernetes.io/projected/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-kube-api-access-tpmm2\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.868374 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.868387 4858 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.868398 4858 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:28 crc kubenswrapper[4858]: I0202 17:30:28.868409 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf16bc74-b9cb-4774-b646-a4de84eb4dd9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:29 crc kubenswrapper[4858]: I0202 17:30:29.201356 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-44qfs" event={"ID":"bf16bc74-b9cb-4774-b646-a4de84eb4dd9","Type":"ContainerDied","Data":"4443124c5bd466c60b396eaac3d110c6ade5b451ef2de30daad877554d668091"} Feb 02 17:30:29 crc kubenswrapper[4858]: I0202 17:30:29.201426 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4443124c5bd466c60b396eaac3d110c6ade5b451ef2de30daad877554d668091" Feb 02 17:30:29 crc kubenswrapper[4858]: I0202 17:30:29.201474 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-44qfs" Feb 02 17:30:29 crc kubenswrapper[4858]: I0202 17:30:29.478908 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/703d6256-20d4-45fc-9a4c-ec6970ea250d-etc-swift\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:29 crc kubenswrapper[4858]: I0202 17:30:29.484862 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/703d6256-20d4-45fc-9a4c-ec6970ea250d-etc-swift\") pod \"swift-storage-0\" (UID: \"703d6256-20d4-45fc-9a4c-ec6970ea250d\") " pod="openstack/swift-storage-0" Feb 02 17:30:29 crc kubenswrapper[4858]: I0202 17:30:29.501872 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 02 17:30:30 crc kubenswrapper[4858]: I0202 17:30:30.086127 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 02 17:30:30 crc kubenswrapper[4858]: W0202 17:30:30.099148 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod703d6256_20d4_45fc_9a4c_ec6970ea250d.slice/crio-04f3f396b2711152ad5aa3b49a11f67ebb98cf45f0b00dd5dec9932b02c949a8 WatchSource:0}: Error finding container 04f3f396b2711152ad5aa3b49a11f67ebb98cf45f0b00dd5dec9932b02c949a8: Status 404 returned error can't find the container with id 04f3f396b2711152ad5aa3b49a11f67ebb98cf45f0b00dd5dec9932b02c949a8 Feb 02 17:30:30 crc kubenswrapper[4858]: I0202 17:30:30.212118 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"703d6256-20d4-45fc-9a4c-ec6970ea250d","Type":"ContainerStarted","Data":"04f3f396b2711152ad5aa3b49a11f67ebb98cf45f0b00dd5dec9932b02c949a8"} Feb 02 17:30:30 crc kubenswrapper[4858]: I0202 17:30:30.217531 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-h6kmt" podUID="334dab9b-9793-4424-9c39-27eac5f07626" containerName="ovn-controller" probeResult="failure" output=< Feb 02 17:30:30 crc kubenswrapper[4858]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 02 17:30:30 crc kubenswrapper[4858]: > Feb 02 17:30:30 crc kubenswrapper[4858]: I0202 17:30:30.267317 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vpfb7"] Feb 02 17:30:30 crc kubenswrapper[4858]: E0202 17:30:30.267678 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf16bc74-b9cb-4774-b646-a4de84eb4dd9" containerName="swift-ring-rebalance" Feb 02 17:30:30 crc kubenswrapper[4858]: I0202 17:30:30.267701 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf16bc74-b9cb-4774-b646-a4de84eb4dd9" containerName="swift-ring-rebalance" Feb 02 17:30:30 crc kubenswrapper[4858]: I0202 17:30:30.267868 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf16bc74-b9cb-4774-b646-a4de84eb4dd9" containerName="swift-ring-rebalance" Feb 02 17:30:30 crc kubenswrapper[4858]: I0202 17:30:30.269042 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vpfb7" Feb 02 17:30:30 crc kubenswrapper[4858]: I0202 17:30:30.279828 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vpfb7"] Feb 02 17:30:30 crc kubenswrapper[4858]: I0202 17:30:30.398920 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/920ebb7d-135e-4b6b-9f23-4649a33e8896-catalog-content\") pod \"redhat-operators-vpfb7\" (UID: \"920ebb7d-135e-4b6b-9f23-4649a33e8896\") " pod="openshift-marketplace/redhat-operators-vpfb7" Feb 02 17:30:30 crc kubenswrapper[4858]: I0202 17:30:30.399005 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/920ebb7d-135e-4b6b-9f23-4649a33e8896-utilities\") pod \"redhat-operators-vpfb7\" (UID: \"920ebb7d-135e-4b6b-9f23-4649a33e8896\") " pod="openshift-marketplace/redhat-operators-vpfb7" Feb 02 17:30:30 crc kubenswrapper[4858]: I0202 17:30:30.399053 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krfb8\" (UniqueName: \"kubernetes.io/projected/920ebb7d-135e-4b6b-9f23-4649a33e8896-kube-api-access-krfb8\") pod \"redhat-operators-vpfb7\" (UID: \"920ebb7d-135e-4b6b-9f23-4649a33e8896\") " pod="openshift-marketplace/redhat-operators-vpfb7" Feb 02 17:30:30 crc kubenswrapper[4858]: I0202 17:30:30.500796 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/920ebb7d-135e-4b6b-9f23-4649a33e8896-catalog-content\") pod \"redhat-operators-vpfb7\" (UID: \"920ebb7d-135e-4b6b-9f23-4649a33e8896\") " pod="openshift-marketplace/redhat-operators-vpfb7" Feb 02 17:30:30 crc kubenswrapper[4858]: I0202 17:30:30.500866 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/920ebb7d-135e-4b6b-9f23-4649a33e8896-utilities\") pod \"redhat-operators-vpfb7\" (UID: \"920ebb7d-135e-4b6b-9f23-4649a33e8896\") " pod="openshift-marketplace/redhat-operators-vpfb7" Feb 02 17:30:30 crc kubenswrapper[4858]: I0202 17:30:30.500905 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krfb8\" (UniqueName: \"kubernetes.io/projected/920ebb7d-135e-4b6b-9f23-4649a33e8896-kube-api-access-krfb8\") pod \"redhat-operators-vpfb7\" (UID: \"920ebb7d-135e-4b6b-9f23-4649a33e8896\") " pod="openshift-marketplace/redhat-operators-vpfb7" Feb 02 17:30:30 crc kubenswrapper[4858]: I0202 17:30:30.501434 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/920ebb7d-135e-4b6b-9f23-4649a33e8896-catalog-content\") pod \"redhat-operators-vpfb7\" (UID: \"920ebb7d-135e-4b6b-9f23-4649a33e8896\") " pod="openshift-marketplace/redhat-operators-vpfb7" Feb 02 17:30:30 crc kubenswrapper[4858]: I0202 17:30:30.501763 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/920ebb7d-135e-4b6b-9f23-4649a33e8896-utilities\") pod \"redhat-operators-vpfb7\" (UID: \"920ebb7d-135e-4b6b-9f23-4649a33e8896\") " pod="openshift-marketplace/redhat-operators-vpfb7" Feb 02 17:30:30 crc kubenswrapper[4858]: I0202 17:30:30.524601 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-krfb8\" (UniqueName: \"kubernetes.io/projected/920ebb7d-135e-4b6b-9f23-4649a33e8896-kube-api-access-krfb8\") pod \"redhat-operators-vpfb7\" (UID: \"920ebb7d-135e-4b6b-9f23-4649a33e8896\") " pod="openshift-marketplace/redhat-operators-vpfb7" Feb 02 17:30:30 crc kubenswrapper[4858]: I0202 17:30:30.600048 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vpfb7" Feb 02 17:30:31 crc kubenswrapper[4858]: I0202 17:30:31.052987 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vpfb7"] Feb 02 17:30:31 crc kubenswrapper[4858]: W0202 17:30:31.358529 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod920ebb7d_135e_4b6b_9f23_4649a33e8896.slice/crio-86c4072aeb6c80ed779af3a7f45942c51c958922539212a374b7dfa22805ca43 WatchSource:0}: Error finding container 86c4072aeb6c80ed779af3a7f45942c51c958922539212a374b7dfa22805ca43: Status 404 returned error can't find the container with id 86c4072aeb6c80ed779af3a7f45942c51c958922539212a374b7dfa22805ca43 Feb 02 17:30:32 crc kubenswrapper[4858]: I0202 17:30:32.262339 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"703d6256-20d4-45fc-9a4c-ec6970ea250d","Type":"ContainerStarted","Data":"debc6412b2cb4d3228993fad18a7781c6bb903dbf702ff79bc5657a49a3b296e"} Feb 02 17:30:32 crc kubenswrapper[4858]: I0202 17:30:32.262618 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"703d6256-20d4-45fc-9a4c-ec6970ea250d","Type":"ContainerStarted","Data":"8d46ba82193b16fdbe39ff195877c6f593f1773164f6e478117bd97e24675517"} Feb 02 17:30:32 crc kubenswrapper[4858]: I0202 17:30:32.262629 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"703d6256-20d4-45fc-9a4c-ec6970ea250d","Type":"ContainerStarted","Data":"3a70f21607e76029b11426d79425b440a250fb0c10d23cad2d2131bf73c7de66"} Feb 02 17:30:32 crc kubenswrapper[4858]: I0202 17:30:32.262639 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"703d6256-20d4-45fc-9a4c-ec6970ea250d","Type":"ContainerStarted","Data":"6a287c47310cf393acaf569fc7002a4318933be7b7a807fff85ca41be9e75172"} Feb 02 17:30:32 crc kubenswrapper[4858]: I0202 17:30:32.263847 4858 generic.go:334] "Generic (PLEG): container finished" podID="55d221f1-91f9-4045-b94b-95facb25b3dc" containerID="7aadaa269dc736d732bdf76758c3351ee91ef7f1b2b6fb59c37adeb68100c533" exitCode=0 Feb 02 17:30:32 crc kubenswrapper[4858]: I0202 17:30:32.263892 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"55d221f1-91f9-4045-b94b-95facb25b3dc","Type":"ContainerDied","Data":"7aadaa269dc736d732bdf76758c3351ee91ef7f1b2b6fb59c37adeb68100c533"} Feb 02 17:30:32 crc kubenswrapper[4858]: I0202 17:30:32.266892 4858 generic.go:334] "Generic (PLEG): container finished" podID="920ebb7d-135e-4b6b-9f23-4649a33e8896" containerID="a4b5315f173af3b098819db9b8431e6e2de941df65b5a77a929f46ec2893d3ae" exitCode=0 Feb 02 17:30:32 crc kubenswrapper[4858]: I0202 17:30:32.266989 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpfb7" event={"ID":"920ebb7d-135e-4b6b-9f23-4649a33e8896","Type":"ContainerDied","Data":"a4b5315f173af3b098819db9b8431e6e2de941df65b5a77a929f46ec2893d3ae"} Feb 02 17:30:32 crc kubenswrapper[4858]: I0202 
17:30:32.267129 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpfb7" event={"ID":"920ebb7d-135e-4b6b-9f23-4649a33e8896","Type":"ContainerStarted","Data":"86c4072aeb6c80ed779af3a7f45942c51c958922539212a374b7dfa22805ca43"} Feb 02 17:30:33 crc kubenswrapper[4858]: I0202 17:30:33.161182 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-8rcln"] Feb 02 17:30:33 crc kubenswrapper[4858]: I0202 17:30:33.162390 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8rcln" Feb 02 17:30:33 crc kubenswrapper[4858]: I0202 17:30:33.165095 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 02 17:30:33 crc kubenswrapper[4858]: I0202 17:30:33.176534 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8rcln"] Feb 02 17:30:33 crc kubenswrapper[4858]: I0202 17:30:33.243403 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8bf5ccf-5a03-4178-b13a-1134553abfcb-operator-scripts\") pod \"root-account-create-update-8rcln\" (UID: \"e8bf5ccf-5a03-4178-b13a-1134553abfcb\") " pod="openstack/root-account-create-update-8rcln" Feb 02 17:30:33 crc kubenswrapper[4858]: I0202 17:30:33.243510 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-427pf\" (UniqueName: \"kubernetes.io/projected/e8bf5ccf-5a03-4178-b13a-1134553abfcb-kube-api-access-427pf\") pod \"root-account-create-update-8rcln\" (UID: \"e8bf5ccf-5a03-4178-b13a-1134553abfcb\") " pod="openstack/root-account-create-update-8rcln" Feb 02 17:30:33 crc kubenswrapper[4858]: I0202 17:30:33.275607 4858 generic.go:334] "Generic (PLEG): container finished" podID="0bbd4c99-7b4c-4bb5-833d-aa638125ad9e" containerID="ac808642e58db1d8622822bc03c9ea33e8538cb4f029c9db6a85997ead44db3c" exitCode=0 Feb 02 17:30:33 crc kubenswrapper[4858]: I0202 17:30:33.275655 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e","Type":"ContainerDied","Data":"ac808642e58db1d8622822bc03c9ea33e8538cb4f029c9db6a85997ead44db3c"} Feb 02 17:30:33 crc kubenswrapper[4858]: I0202 17:30:33.345181 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-427pf\" (UniqueName: \"kubernetes.io/projected/e8bf5ccf-5a03-4178-b13a-1134553abfcb-kube-api-access-427pf\") pod \"root-account-create-update-8rcln\" (UID: \"e8bf5ccf-5a03-4178-b13a-1134553abfcb\") " pod="openstack/root-account-create-update-8rcln" Feb 02 17:30:33 crc kubenswrapper[4858]: I0202 17:30:33.345381 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8bf5ccf-5a03-4178-b13a-1134553abfcb-operator-scripts\") pod \"root-account-create-update-8rcln\" (UID: \"e8bf5ccf-5a03-4178-b13a-1134553abfcb\") " pod="openstack/root-account-create-update-8rcln" Feb 02 17:30:33 crc kubenswrapper[4858]: I0202 17:30:33.345915 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8bf5ccf-5a03-4178-b13a-1134553abfcb-operator-scripts\") pod \"root-account-create-update-8rcln\" (UID: \"e8bf5ccf-5a03-4178-b13a-1134553abfcb\") " 
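Each volume above walks the same three reconciler phases: operationExecutor.VerifyControllerAttachedVolume (reconciler_common.go:245) confirms the volume is attached in the actual state of the world, which is trivial for node-local plugins such as empty-dir, configmap, and projected; operationExecutor.MountVolume (reconciler_common.go:218) queues the mount; and MountVolume.SetUp succeeded (operation_generator.go:637) reports completion. A minimal sketch that tags a journal dump on stdin with those phases (the short labels on the right are editorial, not kubelet terminology):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Phase markers taken verbatim from the messages above.
var phases = []struct{ needle, label string }{
	{"operationExecutor.VerifyControllerAttachedVolume started", "attach-verified"},
	{"operationExecutor.MountVolume started", "mount-started"},
	{"MountVolume.SetUp succeeded", "mounted"},
	{"operationExecutor.UnmountVolume started", "unmount-started"},
	{"UnmountVolume.TearDown succeeded", "torn-down"},
	{"Volume detached", "detached"},
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		for _, p := range phases {
			if strings.Contains(line, p.needle) {
				fmt.Printf("%-16s %s\n", p.label, line)
				break
			}
		}
	}
}
```

Fed with kubelet journal output (for example via journalctl, assuming the kubelet unit name on the host), it prints one labelled line per reconciler transition.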
pod="openstack/root-account-create-update-8rcln" Feb 02 17:30:33 crc kubenswrapper[4858]: I0202 17:30:33.382032 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-427pf\" (UniqueName: \"kubernetes.io/projected/e8bf5ccf-5a03-4178-b13a-1134553abfcb-kube-api-access-427pf\") pod \"root-account-create-update-8rcln\" (UID: \"e8bf5ccf-5a03-4178-b13a-1134553abfcb\") " pod="openstack/root-account-create-update-8rcln" Feb 02 17:30:33 crc kubenswrapper[4858]: I0202 17:30:33.486643 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8rcln" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.198335 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.201753 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tc4gv" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.216787 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-h6kmt" podUID="334dab9b-9793-4424-9c39-27eac5f07626" containerName="ovn-controller" probeResult="failure" output=< Feb 02 17:30:35 crc kubenswrapper[4858]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 02 17:30:35 crc kubenswrapper[4858]: > Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.438242 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-h6kmt-config-lr2gl"] Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.439245 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.441476 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.453998 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-h6kmt-config-lr2gl"] Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.478511 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9e86c0e0-2ce6-49ad-92fd-d086313059dc-var-log-ovn\") pod \"ovn-controller-h6kmt-config-lr2gl\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.478577 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9e86c0e0-2ce6-49ad-92fd-d086313059dc-var-run\") pod \"ovn-controller-h6kmt-config-lr2gl\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.478621 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e86c0e0-2ce6-49ad-92fd-d086313059dc-scripts\") pod \"ovn-controller-h6kmt-config-lr2gl\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.478653 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zl68\" 
(UniqueName: \"kubernetes.io/projected/9e86c0e0-2ce6-49ad-92fd-d086313059dc-kube-api-access-2zl68\") pod \"ovn-controller-h6kmt-config-lr2gl\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.478944 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e86c0e0-2ce6-49ad-92fd-d086313059dc-var-run-ovn\") pod \"ovn-controller-h6kmt-config-lr2gl\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.479189 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9e86c0e0-2ce6-49ad-92fd-d086313059dc-additional-scripts\") pod \"ovn-controller-h6kmt-config-lr2gl\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.581244 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e86c0e0-2ce6-49ad-92fd-d086313059dc-scripts\") pod \"ovn-controller-h6kmt-config-lr2gl\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.581297 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zl68\" (UniqueName: \"kubernetes.io/projected/9e86c0e0-2ce6-49ad-92fd-d086313059dc-kube-api-access-2zl68\") pod \"ovn-controller-h6kmt-config-lr2gl\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.581343 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e86c0e0-2ce6-49ad-92fd-d086313059dc-var-run-ovn\") pod \"ovn-controller-h6kmt-config-lr2gl\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.581384 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9e86c0e0-2ce6-49ad-92fd-d086313059dc-additional-scripts\") pod \"ovn-controller-h6kmt-config-lr2gl\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.581421 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9e86c0e0-2ce6-49ad-92fd-d086313059dc-var-log-ovn\") pod \"ovn-controller-h6kmt-config-lr2gl\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.581447 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9e86c0e0-2ce6-49ad-92fd-d086313059dc-var-run\") pod \"ovn-controller-h6kmt-config-lr2gl\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.581730 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9e86c0e0-2ce6-49ad-92fd-d086313059dc-var-run\") pod \"ovn-controller-h6kmt-config-lr2gl\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.581774 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9e86c0e0-2ce6-49ad-92fd-d086313059dc-var-log-ovn\") pod \"ovn-controller-h6kmt-config-lr2gl\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.581780 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e86c0e0-2ce6-49ad-92fd-d086313059dc-var-run-ovn\") pod \"ovn-controller-h6kmt-config-lr2gl\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.582289 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9e86c0e0-2ce6-49ad-92fd-d086313059dc-additional-scripts\") pod \"ovn-controller-h6kmt-config-lr2gl\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.583472 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e86c0e0-2ce6-49ad-92fd-d086313059dc-scripts\") pod \"ovn-controller-h6kmt-config-lr2gl\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.600995 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zl68\" (UniqueName: \"kubernetes.io/projected/9e86c0e0-2ce6-49ad-92fd-d086313059dc-kube-api-access-2zl68\") pod \"ovn-controller-h6kmt-config-lr2gl\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:35 crc kubenswrapper[4858]: I0202 17:30:35.764821 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:39 crc kubenswrapper[4858]: I0202 17:30:39.232939 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-h6kmt-config-lr2gl"] Feb 02 17:30:39 crc kubenswrapper[4858]: I0202 17:30:39.255597 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8rcln"] Feb 02 17:30:39 crc kubenswrapper[4858]: I0202 17:30:39.328737 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"55d221f1-91f9-4045-b94b-95facb25b3dc","Type":"ContainerStarted","Data":"c0b46dd7a0e07204197d3abee37af0a73dbe003a0faade6651a8184776b0b279"} Feb 02 17:30:39 crc kubenswrapper[4858]: I0202 17:30:39.328927 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 02 17:30:39 crc kubenswrapper[4858]: I0202 17:30:39.333909 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e","Type":"ContainerStarted","Data":"cb7403de122bf4f3502af410af10c251740d4ed4c1edde9c052461140e977e05"} Feb 02 17:30:39 crc kubenswrapper[4858]: I0202 17:30:39.334147 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:30:39 crc kubenswrapper[4858]: I0202 17:30:39.362390 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=57.04789281 podStartE2EDuration="1m4.362373135s" podCreationTimestamp="2026-02-02 17:29:35 +0000 UTC" firstStartedPulling="2026-02-02 17:29:51.302120294 +0000 UTC m=+892.454535559" lastFinishedPulling="2026-02-02 17:29:58.616600579 +0000 UTC m=+899.769015884" observedRunningTime="2026-02-02 17:30:39.355747328 +0000 UTC m=+940.508162853" watchObservedRunningTime="2026-02-02 17:30:39.362373135 +0000 UTC m=+940.514788390" Feb 02 17:30:39 crc kubenswrapper[4858]: I0202 17:30:39.396276 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=56.828442054 podStartE2EDuration="1m4.39625452s" podCreationTimestamp="2026-02-02 17:29:35 +0000 UTC" firstStartedPulling="2026-02-02 17:29:50.785637934 +0000 UTC m=+891.938053239" lastFinishedPulling="2026-02-02 17:29:58.35345044 +0000 UTC m=+899.505865705" observedRunningTime="2026-02-02 17:30:39.390273011 +0000 UTC m=+940.542688276" watchObservedRunningTime="2026-02-02 17:30:39.39625452 +0000 UTC m=+940.548669785" Feb 02 17:30:39 crc kubenswrapper[4858]: W0202 17:30:39.430178 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8bf5ccf_5a03_4178_b13a_1134553abfcb.slice/crio-d7ce4aaa4921396b98fb813996d5384df86ca8085a113701f45a04dcc491811b WatchSource:0}: Error finding container d7ce4aaa4921396b98fb813996d5384df86ca8085a113701f45a04dcc491811b: Status 404 returned error can't find the container with id d7ce4aaa4921396b98fb813996d5384df86ca8085a113701f45a04dcc491811b Feb 02 17:30:39 crc kubenswrapper[4858]: W0202 17:30:39.431636 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e86c0e0_2ce6_49ad_92fd_d086313059dc.slice/crio-f5f599bc0d6e5a63682a667225f7f897fcd0ab9bc812f8fe6064f5523e52081b WatchSource:0}: Error finding container f5f599bc0d6e5a63682a667225f7f897fcd0ab9bc812f8fe6064f5523e52081b: 
Status 404 returned error can't find the container with id f5f599bc0d6e5a63682a667225f7f897fcd0ab9bc812f8fe6064f5523e52081b Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.222742 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-h6kmt" Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.342679 4858 generic.go:334] "Generic (PLEG): container finished" podID="e8bf5ccf-5a03-4178-b13a-1134553abfcb" containerID="dbae0a01ba724a0aa7dfcd6eccc8610d11cb6642d9ecfd5b3c127c88fa17ccd1" exitCode=0 Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.342766 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8rcln" event={"ID":"e8bf5ccf-5a03-4178-b13a-1134553abfcb","Type":"ContainerDied","Data":"dbae0a01ba724a0aa7dfcd6eccc8610d11cb6642d9ecfd5b3c127c88fa17ccd1"} Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.342820 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8rcln" event={"ID":"e8bf5ccf-5a03-4178-b13a-1134553abfcb","Type":"ContainerStarted","Data":"d7ce4aaa4921396b98fb813996d5384df86ca8085a113701f45a04dcc491811b"} Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.345541 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cv9mb" event={"ID":"5cb04b10-2484-4c41-903c-d12ea9ab3600","Type":"ContainerStarted","Data":"a7c6251270468a60f8437db0f238d31075f1433aaa75c717687708698d74c638"} Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.349056 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpfb7" event={"ID":"920ebb7d-135e-4b6b-9f23-4649a33e8896","Type":"ContainerStarted","Data":"c851d14184ee121b784ba7af2bcdb41985e72ead5d58397597e3baa093b586a6"} Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.351505 4858 generic.go:334] "Generic (PLEG): container finished" podID="9e86c0e0-2ce6-49ad-92fd-d086313059dc" containerID="8a893e64e653d2a20cf68f5b07159010b5cc01b61a63ee2621691d468a67a7fd" exitCode=0 Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.351556 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-h6kmt-config-lr2gl" event={"ID":"9e86c0e0-2ce6-49ad-92fd-d086313059dc","Type":"ContainerDied","Data":"8a893e64e653d2a20cf68f5b07159010b5cc01b61a63ee2621691d468a67a7fd"} Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.351580 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-h6kmt-config-lr2gl" event={"ID":"9e86c0e0-2ce6-49ad-92fd-d086313059dc","Type":"ContainerStarted","Data":"f5f599bc0d6e5a63682a667225f7f897fcd0ab9bc812f8fe6064f5523e52081b"} Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.373685 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"703d6256-20d4-45fc-9a4c-ec6970ea250d","Type":"ContainerStarted","Data":"c306fa89aabf892710d5436a78b377c6e7870d8407b5a7bf8c4939348af8acf9"} Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.373719 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"703d6256-20d4-45fc-9a4c-ec6970ea250d","Type":"ContainerStarted","Data":"38354951f7021c0e8bae2e06a7b5f1972c22507095252b74c60c9eb5241bc1a5"} Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.373747 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"703d6256-20d4-45fc-9a4c-ec6970ea250d","Type":"ContainerStarted","Data":"12220afb39424b74efd8fa50b3f8b1e767a0cce59bc6687781ad6cb745fb694e"} Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.385459 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-cv9mb" podStartSLOduration=2.958642023 podStartE2EDuration="15.385437358s" podCreationTimestamp="2026-02-02 17:30:25 +0000 UTC" firstStartedPulling="2026-02-02 17:30:26.378304178 +0000 UTC m=+927.530719443" lastFinishedPulling="2026-02-02 17:30:38.805099493 +0000 UTC m=+939.957514778" observedRunningTime="2026-02-02 17:30:40.382193457 +0000 UTC m=+941.534608722" watchObservedRunningTime="2026-02-02 17:30:40.385437358 +0000 UTC m=+941.537852623" Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.794873 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z9n2d"] Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.797358 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z9n2d" Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.806475 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z9n2d"] Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.872661 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/193d2b9e-dc31-4d42-971b-ff706ff40bb1-utilities\") pod \"community-operators-z9n2d\" (UID: \"193d2b9e-dc31-4d42-971b-ff706ff40bb1\") " pod="openshift-marketplace/community-operators-z9n2d" Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.872764 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48h2d\" (UniqueName: \"kubernetes.io/projected/193d2b9e-dc31-4d42-971b-ff706ff40bb1-kube-api-access-48h2d\") pod \"community-operators-z9n2d\" (UID: \"193d2b9e-dc31-4d42-971b-ff706ff40bb1\") " pod="openshift-marketplace/community-operators-z9n2d" Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.872828 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/193d2b9e-dc31-4d42-971b-ff706ff40bb1-catalog-content\") pod \"community-operators-z9n2d\" (UID: \"193d2b9e-dc31-4d42-971b-ff706ff40bb1\") " pod="openshift-marketplace/community-operators-z9n2d" Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.974234 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48h2d\" (UniqueName: \"kubernetes.io/projected/193d2b9e-dc31-4d42-971b-ff706ff40bb1-kube-api-access-48h2d\") pod \"community-operators-z9n2d\" (UID: \"193d2b9e-dc31-4d42-971b-ff706ff40bb1\") " pod="openshift-marketplace/community-operators-z9n2d" Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.974289 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/193d2b9e-dc31-4d42-971b-ff706ff40bb1-catalog-content\") pod \"community-operators-z9n2d\" (UID: \"193d2b9e-dc31-4d42-971b-ff706ff40bb1\") " pod="openshift-marketplace/community-operators-z9n2d" Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.974358 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/193d2b9e-dc31-4d42-971b-ff706ff40bb1-utilities\") pod \"community-operators-z9n2d\" (UID: \"193d2b9e-dc31-4d42-971b-ff706ff40bb1\") " pod="openshift-marketplace/community-operators-z9n2d" Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.974810 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/193d2b9e-dc31-4d42-971b-ff706ff40bb1-utilities\") pod \"community-operators-z9n2d\" (UID: \"193d2b9e-dc31-4d42-971b-ff706ff40bb1\") " pod="openshift-marketplace/community-operators-z9n2d" Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.974936 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/193d2b9e-dc31-4d42-971b-ff706ff40bb1-catalog-content\") pod \"community-operators-z9n2d\" (UID: \"193d2b9e-dc31-4d42-971b-ff706ff40bb1\") " pod="openshift-marketplace/community-operators-z9n2d" Feb 02 17:30:40 crc kubenswrapper[4858]: I0202 17:30:40.992669 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48h2d\" (UniqueName: \"kubernetes.io/projected/193d2b9e-dc31-4d42-971b-ff706ff40bb1-kube-api-access-48h2d\") pod \"community-operators-z9n2d\" (UID: \"193d2b9e-dc31-4d42-971b-ff706ff40bb1\") " pod="openshift-marketplace/community-operators-z9n2d" Feb 02 17:30:41 crc kubenswrapper[4858]: I0202 17:30:41.123454 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z9n2d" Feb 02 17:30:41 crc kubenswrapper[4858]: I0202 17:30:41.395823 4858 generic.go:334] "Generic (PLEG): container finished" podID="920ebb7d-135e-4b6b-9f23-4649a33e8896" containerID="c851d14184ee121b784ba7af2bcdb41985e72ead5d58397597e3baa093b586a6" exitCode=0 Feb 02 17:30:41 crc kubenswrapper[4858]: I0202 17:30:41.395917 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpfb7" event={"ID":"920ebb7d-135e-4b6b-9f23-4649a33e8896","Type":"ContainerDied","Data":"c851d14184ee121b784ba7af2bcdb41985e72ead5d58397597e3baa093b586a6"} Feb 02 17:30:41 crc kubenswrapper[4858]: I0202 17:30:41.414042 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"703d6256-20d4-45fc-9a4c-ec6970ea250d","Type":"ContainerStarted","Data":"acd76356e929bb7f72e7dcc599b9baac23a3850099241f4a5b15fdcf817ee3d9"} Feb 02 17:30:41 crc kubenswrapper[4858]: I0202 17:30:41.446392 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z9n2d"] Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.250715 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.264752 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8rcln" Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.414554 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-427pf\" (UniqueName: \"kubernetes.io/projected/e8bf5ccf-5a03-4178-b13a-1134553abfcb-kube-api-access-427pf\") pod \"e8bf5ccf-5a03-4178-b13a-1134553abfcb\" (UID: \"e8bf5ccf-5a03-4178-b13a-1134553abfcb\") " Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.414880 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9e86c0e0-2ce6-49ad-92fd-d086313059dc-additional-scripts\") pod \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.414906 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8bf5ccf-5a03-4178-b13a-1134553abfcb-operator-scripts\") pod \"e8bf5ccf-5a03-4178-b13a-1134553abfcb\" (UID: \"e8bf5ccf-5a03-4178-b13a-1134553abfcb\") " Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.414959 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9e86c0e0-2ce6-49ad-92fd-d086313059dc-var-log-ovn\") pod \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.415038 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e86c0e0-2ce6-49ad-92fd-d086313059dc-scripts\") pod \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.416069 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9e86c0e0-2ce6-49ad-92fd-d086313059dc-var-run\") pod \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.415082 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e86c0e0-2ce6-49ad-92fd-d086313059dc-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9e86c0e0-2ce6-49ad-92fd-d086313059dc" (UID: "9e86c0e0-2ce6-49ad-92fd-d086313059dc"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.415614 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8bf5ccf-5a03-4178-b13a-1134553abfcb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8bf5ccf-5a03-4178-b13a-1134553abfcb" (UID: "e8bf5ccf-5a03-4178-b13a-1134553abfcb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.415748 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e86c0e0-2ce6-49ad-92fd-d086313059dc-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9e86c0e0-2ce6-49ad-92fd-d086313059dc" (UID: "9e86c0e0-2ce6-49ad-92fd-d086313059dc"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.415991 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e86c0e0-2ce6-49ad-92fd-d086313059dc-scripts" (OuterVolumeSpecName: "scripts") pod "9e86c0e0-2ce6-49ad-92fd-d086313059dc" (UID: "9e86c0e0-2ce6-49ad-92fd-d086313059dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.416123 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e86c0e0-2ce6-49ad-92fd-d086313059dc-var-run" (OuterVolumeSpecName: "var-run") pod "9e86c0e0-2ce6-49ad-92fd-d086313059dc" (UID: "9e86c0e0-2ce6-49ad-92fd-d086313059dc"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.416322 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e86c0e0-2ce6-49ad-92fd-d086313059dc-var-run-ovn\") pod \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.416393 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e86c0e0-2ce6-49ad-92fd-d086313059dc-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9e86c0e0-2ce6-49ad-92fd-d086313059dc" (UID: "9e86c0e0-2ce6-49ad-92fd-d086313059dc"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.416453 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zl68\" (UniqueName: \"kubernetes.io/projected/9e86c0e0-2ce6-49ad-92fd-d086313059dc-kube-api-access-2zl68\") pod \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\" (UID: \"9e86c0e0-2ce6-49ad-92fd-d086313059dc\") " Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.418167 4858 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9e86c0e0-2ce6-49ad-92fd-d086313059dc-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.418192 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8bf5ccf-5a03-4178-b13a-1134553abfcb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.418206 4858 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9e86c0e0-2ce6-49ad-92fd-d086313059dc-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.418218 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e86c0e0-2ce6-49ad-92fd-d086313059dc-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.418232 4858 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9e86c0e0-2ce6-49ad-92fd-d086313059dc-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.418268 4858 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/9e86c0e0-2ce6-49ad-92fd-d086313059dc-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.418499 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8bf5ccf-5a03-4178-b13a-1134553abfcb-kube-api-access-427pf" (OuterVolumeSpecName: "kube-api-access-427pf") pod "e8bf5ccf-5a03-4178-b13a-1134553abfcb" (UID: "e8bf5ccf-5a03-4178-b13a-1134553abfcb"). InnerVolumeSpecName "kube-api-access-427pf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.425247 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e86c0e0-2ce6-49ad-92fd-d086313059dc-kube-api-access-2zl68" (OuterVolumeSpecName: "kube-api-access-2zl68") pod "9e86c0e0-2ce6-49ad-92fd-d086313059dc" (UID: "9e86c0e0-2ce6-49ad-92fd-d086313059dc"). InnerVolumeSpecName "kube-api-access-2zl68". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.434319 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-h6kmt-config-lr2gl" event={"ID":"9e86c0e0-2ce6-49ad-92fd-d086313059dc","Type":"ContainerDied","Data":"f5f599bc0d6e5a63682a667225f7f897fcd0ab9bc812f8fe6064f5523e52081b"} Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.434374 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5f599bc0d6e5a63682a667225f7f897fcd0ab9bc812f8fe6064f5523e52081b" Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.434337 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-h6kmt-config-lr2gl" Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.436570 4858 generic.go:334] "Generic (PLEG): container finished" podID="193d2b9e-dc31-4d42-971b-ff706ff40bb1" containerID="44146ec06a4a4f70430533c7b89e8e31584909318a823aaac79939e70ab0357f" exitCode=0 Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.436647 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9n2d" event={"ID":"193d2b9e-dc31-4d42-971b-ff706ff40bb1","Type":"ContainerDied","Data":"44146ec06a4a4f70430533c7b89e8e31584909318a823aaac79939e70ab0357f"} Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.436681 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9n2d" event={"ID":"193d2b9e-dc31-4d42-971b-ff706ff40bb1","Type":"ContainerStarted","Data":"c7eef3ee3c84ff95e80920239514951f8549aae18ffa5ffd1131bd346e91457a"} Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.452620 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"703d6256-20d4-45fc-9a4c-ec6970ea250d","Type":"ContainerStarted","Data":"9c3d961df86824582eac5e21fecdd65c7a6933b2de509993fc87124a2b5c7da0"} Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.454833 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8rcln" event={"ID":"e8bf5ccf-5a03-4178-b13a-1134553abfcb","Type":"ContainerDied","Data":"d7ce4aaa4921396b98fb813996d5384df86ca8085a113701f45a04dcc491811b"} Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.454886 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7ce4aaa4921396b98fb813996d5384df86ca8085a113701f45a04dcc491811b" Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 
Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.461600 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpfb7" event={"ID":"920ebb7d-135e-4b6b-9f23-4649a33e8896","Type":"ContainerStarted","Data":"068ae0589b0c28ba85e119c5d2ba19080b6ba284ab34e08e6f48f630fc022b29"}
Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.491795 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vpfb7" podStartSLOduration=9.056530832 podStartE2EDuration="12.491778731s" podCreationTimestamp="2026-02-02 17:30:30 +0000 UTC" firstStartedPulling="2026-02-02 17:30:38.688168677 +0000 UTC m=+939.840583982" lastFinishedPulling="2026-02-02 17:30:42.123416616 +0000 UTC m=+943.275831881" observedRunningTime="2026-02-02 17:30:42.48569225 +0000 UTC m=+943.638107515" watchObservedRunningTime="2026-02-02 17:30:42.491778731 +0000 UTC m=+943.644193996"
Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.520161 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zl68\" (UniqueName: \"kubernetes.io/projected/9e86c0e0-2ce6-49ad-92fd-d086313059dc-kube-api-access-2zl68\") on node \"crc\" DevicePath \"\""
Feb 02 17:30:42 crc kubenswrapper[4858]: I0202 17:30:42.520192 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-427pf\" (UniqueName: \"kubernetes.io/projected/e8bf5ccf-5a03-4178-b13a-1134553abfcb-kube-api-access-427pf\") on node \"crc\" DevicePath \"\""
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.381167 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-h6kmt-config-lr2gl"]
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.390154 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-h6kmt-config-lr2gl"]
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.473033 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"703d6256-20d4-45fc-9a4c-ec6970ea250d","Type":"ContainerStarted","Data":"207e95039251ba6ac55d4403417cd64a0e83659a7242bb5637a933f1bef260e6"}
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.473074 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"703d6256-20d4-45fc-9a4c-ec6970ea250d","Type":"ContainerStarted","Data":"d913b083d7254142c58e7d276666cc3adee7f058a071180020a9e32ce764c582"}
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.473084 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"703d6256-20d4-45fc-9a4c-ec6970ea250d","Type":"ContainerStarted","Data":"75f42cc0ed2a1f5b1d3d0fb8b95f284893597f2bf0b3e93cfea2bba2d911941c"}
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.505053 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-h6kmt-config-4tcxc"]
Feb 02 17:30:43 crc kubenswrapper[4858]: E0202 17:30:43.505375 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e86c0e0-2ce6-49ad-92fd-d086313059dc" containerName="ovn-config"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.505392 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e86c0e0-2ce6-49ad-92fd-d086313059dc" containerName="ovn-config"
Feb 02 17:30:43 crc kubenswrapper[4858]: E0202 17:30:43.505416 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bf5ccf-5a03-4178-b13a-1134553abfcb" containerName="mariadb-account-create-update"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.505423 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bf5ccf-5a03-4178-b13a-1134553abfcb" containerName="mariadb-account-create-update"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.505594 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e86c0e0-2ce6-49ad-92fd-d086313059dc" containerName="ovn-config"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.505626 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bf5ccf-5a03-4178-b13a-1134553abfcb" containerName="mariadb-account-create-update"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.506113 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-h6kmt-config-4tcxc"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.508339 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.516989 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-h6kmt-config-4tcxc"]
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.539886 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzkh8\" (UniqueName: \"kubernetes.io/projected/2f64d810-ad37-4de4-8a0f-694ea8f03456-kube-api-access-tzkh8\") pod \"ovn-controller-h6kmt-config-4tcxc\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " pod="openstack/ovn-controller-h6kmt-config-4tcxc"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.540056 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2f64d810-ad37-4de4-8a0f-694ea8f03456-additional-scripts\") pod \"ovn-controller-h6kmt-config-4tcxc\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " pod="openstack/ovn-controller-h6kmt-config-4tcxc"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.540105 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f64d810-ad37-4de4-8a0f-694ea8f03456-var-run\") pod \"ovn-controller-h6kmt-config-4tcxc\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " pod="openstack/ovn-controller-h6kmt-config-4tcxc"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.540137 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f64d810-ad37-4de4-8a0f-694ea8f03456-var-run-ovn\") pod \"ovn-controller-h6kmt-config-4tcxc\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " pod="openstack/ovn-controller-h6kmt-config-4tcxc"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.540162 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f64d810-ad37-4de4-8a0f-694ea8f03456-var-log-ovn\") pod \"ovn-controller-h6kmt-config-4tcxc\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " pod="openstack/ovn-controller-h6kmt-config-4tcxc"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.540218 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f64d810-ad37-4de4-8a0f-694ea8f03456-scripts\") pod \"ovn-controller-h6kmt-config-4tcxc\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " pod="openstack/ovn-controller-h6kmt-config-4tcxc"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.640586 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2f64d810-ad37-4de4-8a0f-694ea8f03456-additional-scripts\") pod \"ovn-controller-h6kmt-config-4tcxc\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " pod="openstack/ovn-controller-h6kmt-config-4tcxc"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.640642 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f64d810-ad37-4de4-8a0f-694ea8f03456-var-run\") pod \"ovn-controller-h6kmt-config-4tcxc\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " pod="openstack/ovn-controller-h6kmt-config-4tcxc"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.640660 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f64d810-ad37-4de4-8a0f-694ea8f03456-var-run-ovn\") pod \"ovn-controller-h6kmt-config-4tcxc\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " pod="openstack/ovn-controller-h6kmt-config-4tcxc"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.640678 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f64d810-ad37-4de4-8a0f-694ea8f03456-var-log-ovn\") pod \"ovn-controller-h6kmt-config-4tcxc\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " pod="openstack/ovn-controller-h6kmt-config-4tcxc"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.640713 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f64d810-ad37-4de4-8a0f-694ea8f03456-scripts\") pod \"ovn-controller-h6kmt-config-4tcxc\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " pod="openstack/ovn-controller-h6kmt-config-4tcxc"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.640737 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzkh8\" (UniqueName: \"kubernetes.io/projected/2f64d810-ad37-4de4-8a0f-694ea8f03456-kube-api-access-tzkh8\") pod \"ovn-controller-h6kmt-config-4tcxc\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " pod="openstack/ovn-controller-h6kmt-config-4tcxc"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.641424 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f64d810-ad37-4de4-8a0f-694ea8f03456-var-log-ovn\") pod \"ovn-controller-h6kmt-config-4tcxc\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " pod="openstack/ovn-controller-h6kmt-config-4tcxc"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.641444 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f64d810-ad37-4de4-8a0f-694ea8f03456-var-run-ovn\") pod \"ovn-controller-h6kmt-config-4tcxc\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " pod="openstack/ovn-controller-h6kmt-config-4tcxc"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.641470 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f64d810-ad37-4de4-8a0f-694ea8f03456-var-run\") pod \"ovn-controller-h6kmt-config-4tcxc\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " pod="openstack/ovn-controller-h6kmt-config-4tcxc"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.641718 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2f64d810-ad37-4de4-8a0f-694ea8f03456-additional-scripts\") pod \"ovn-controller-h6kmt-config-4tcxc\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " pod="openstack/ovn-controller-h6kmt-config-4tcxc"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.643145 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f64d810-ad37-4de4-8a0f-694ea8f03456-scripts\") pod \"ovn-controller-h6kmt-config-4tcxc\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " pod="openstack/ovn-controller-h6kmt-config-4tcxc"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.663480 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzkh8\" (UniqueName: \"kubernetes.io/projected/2f64d810-ad37-4de4-8a0f-694ea8f03456-kube-api-access-tzkh8\") pod \"ovn-controller-h6kmt-config-4tcxc\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " pod="openstack/ovn-controller-h6kmt-config-4tcxc"
Feb 02 17:30:43 crc kubenswrapper[4858]: I0202 17:30:43.820103 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-h6kmt-config-4tcxc"
Feb 02 17:30:44 crc kubenswrapper[4858]: I0202 17:30:44.413309 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e86c0e0-2ce6-49ad-92fd-d086313059dc" path="/var/lib/kubelet/pods/9e86c0e0-2ce6-49ad-92fd-d086313059dc/volumes"
Feb 02 17:30:45 crc kubenswrapper[4858]: I0202 17:30:45.492938 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"703d6256-20d4-45fc-9a4c-ec6970ea250d","Type":"ContainerStarted","Data":"aac9a1bc9bdd6f67f10a01cb73410b9a381513b53ca52a437e1d9d46e8b72444"}
Feb 02 17:30:46 crc kubenswrapper[4858]: I0202 17:30:46.259737 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pmntb"]
Feb 02 17:30:46 crc kubenswrapper[4858]: I0202 17:30:46.262675 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmntb"
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmntb" Feb 02 17:30:46 crc kubenswrapper[4858]: I0202 17:30:46.287650 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmntb"] Feb 02 17:30:46 crc kubenswrapper[4858]: I0202 17:30:46.417960 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr2sw\" (UniqueName: \"kubernetes.io/projected/235550bc-ede9-4da9-a2d0-08253c0d3a29-kube-api-access-cr2sw\") pod \"redhat-marketplace-pmntb\" (UID: \"235550bc-ede9-4da9-a2d0-08253c0d3a29\") " pod="openshift-marketplace/redhat-marketplace-pmntb" Feb 02 17:30:46 crc kubenswrapper[4858]: I0202 17:30:46.418232 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/235550bc-ede9-4da9-a2d0-08253c0d3a29-catalog-content\") pod \"redhat-marketplace-pmntb\" (UID: \"235550bc-ede9-4da9-a2d0-08253c0d3a29\") " pod="openshift-marketplace/redhat-marketplace-pmntb" Feb 02 17:30:46 crc kubenswrapper[4858]: I0202 17:30:46.418402 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/235550bc-ede9-4da9-a2d0-08253c0d3a29-utilities\") pod \"redhat-marketplace-pmntb\" (UID: \"235550bc-ede9-4da9-a2d0-08253c0d3a29\") " pod="openshift-marketplace/redhat-marketplace-pmntb" Feb 02 17:30:46 crc kubenswrapper[4858]: I0202 17:30:46.519724 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/235550bc-ede9-4da9-a2d0-08253c0d3a29-catalog-content\") pod \"redhat-marketplace-pmntb\" (UID: \"235550bc-ede9-4da9-a2d0-08253c0d3a29\") " pod="openshift-marketplace/redhat-marketplace-pmntb" Feb 02 17:30:46 crc kubenswrapper[4858]: I0202 17:30:46.519782 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/235550bc-ede9-4da9-a2d0-08253c0d3a29-utilities\") pod \"redhat-marketplace-pmntb\" (UID: \"235550bc-ede9-4da9-a2d0-08253c0d3a29\") " pod="openshift-marketplace/redhat-marketplace-pmntb" Feb 02 17:30:46 crc kubenswrapper[4858]: I0202 17:30:46.519847 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr2sw\" (UniqueName: \"kubernetes.io/projected/235550bc-ede9-4da9-a2d0-08253c0d3a29-kube-api-access-cr2sw\") pod \"redhat-marketplace-pmntb\" (UID: \"235550bc-ede9-4da9-a2d0-08253c0d3a29\") " pod="openshift-marketplace/redhat-marketplace-pmntb" Feb 02 17:30:46 crc kubenswrapper[4858]: I0202 17:30:46.520462 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/235550bc-ede9-4da9-a2d0-08253c0d3a29-catalog-content\") pod \"redhat-marketplace-pmntb\" (UID: \"235550bc-ede9-4da9-a2d0-08253c0d3a29\") " pod="openshift-marketplace/redhat-marketplace-pmntb" Feb 02 17:30:46 crc kubenswrapper[4858]: I0202 17:30:46.520530 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/235550bc-ede9-4da9-a2d0-08253c0d3a29-utilities\") pod \"redhat-marketplace-pmntb\" (UID: \"235550bc-ede9-4da9-a2d0-08253c0d3a29\") " pod="openshift-marketplace/redhat-marketplace-pmntb" Feb 02 17:30:46 crc kubenswrapper[4858]: I0202 17:30:46.541382 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cr2sw\" (UniqueName: \"kubernetes.io/projected/235550bc-ede9-4da9-a2d0-08253c0d3a29-kube-api-access-cr2sw\") pod \"redhat-marketplace-pmntb\" (UID: \"235550bc-ede9-4da9-a2d0-08253c0d3a29\") " pod="openshift-marketplace/redhat-marketplace-pmntb" Feb 02 17:30:46 crc kubenswrapper[4858]: I0202 17:30:46.594318 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmntb" Feb 02 17:30:48 crc kubenswrapper[4858]: I0202 17:30:48.656053 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b2dtp"] Feb 02 17:30:48 crc kubenswrapper[4858]: I0202 17:30:48.662139 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b2dtp" Feb 02 17:30:48 crc kubenswrapper[4858]: I0202 17:30:48.677841 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b2dtp"] Feb 02 17:30:48 crc kubenswrapper[4858]: I0202 17:30:48.861751 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e14259-bf4e-47d1-952c-c17076756fd5-catalog-content\") pod \"certified-operators-b2dtp\" (UID: \"d0e14259-bf4e-47d1-952c-c17076756fd5\") " pod="openshift-marketplace/certified-operators-b2dtp" Feb 02 17:30:48 crc kubenswrapper[4858]: I0202 17:30:48.862020 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e14259-bf4e-47d1-952c-c17076756fd5-utilities\") pod \"certified-operators-b2dtp\" (UID: \"d0e14259-bf4e-47d1-952c-c17076756fd5\") " pod="openshift-marketplace/certified-operators-b2dtp" Feb 02 17:30:48 crc kubenswrapper[4858]: I0202 17:30:48.862227 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzc5x\" (UniqueName: \"kubernetes.io/projected/d0e14259-bf4e-47d1-952c-c17076756fd5-kube-api-access-lzc5x\") pod \"certified-operators-b2dtp\" (UID: \"d0e14259-bf4e-47d1-952c-c17076756fd5\") " pod="openshift-marketplace/certified-operators-b2dtp" Feb 02 17:30:48 crc kubenswrapper[4858]: I0202 17:30:48.964308 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e14259-bf4e-47d1-952c-c17076756fd5-utilities\") pod \"certified-operators-b2dtp\" (UID: \"d0e14259-bf4e-47d1-952c-c17076756fd5\") " pod="openshift-marketplace/certified-operators-b2dtp" Feb 02 17:30:48 crc kubenswrapper[4858]: I0202 17:30:48.964403 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzc5x\" (UniqueName: \"kubernetes.io/projected/d0e14259-bf4e-47d1-952c-c17076756fd5-kube-api-access-lzc5x\") pod \"certified-operators-b2dtp\" (UID: \"d0e14259-bf4e-47d1-952c-c17076756fd5\") " pod="openshift-marketplace/certified-operators-b2dtp" Feb 02 17:30:48 crc kubenswrapper[4858]: I0202 17:30:48.964480 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e14259-bf4e-47d1-952c-c17076756fd5-catalog-content\") pod \"certified-operators-b2dtp\" (UID: \"d0e14259-bf4e-47d1-952c-c17076756fd5\") " pod="openshift-marketplace/certified-operators-b2dtp" Feb 02 17:30:48 crc kubenswrapper[4858]: I0202 17:30:48.964918 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e14259-bf4e-47d1-952c-c17076756fd5-utilities\") pod \"certified-operators-b2dtp\" (UID: \"d0e14259-bf4e-47d1-952c-c17076756fd5\") " pod="openshift-marketplace/certified-operators-b2dtp" Feb 02 17:30:48 crc kubenswrapper[4858]: I0202 17:30:48.964939 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e14259-bf4e-47d1-952c-c17076756fd5-catalog-content\") pod \"certified-operators-b2dtp\" (UID: \"d0e14259-bf4e-47d1-952c-c17076756fd5\") " pod="openshift-marketplace/certified-operators-b2dtp" Feb 02 17:30:48 crc kubenswrapper[4858]: I0202 17:30:48.995155 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzc5x\" (UniqueName: \"kubernetes.io/projected/d0e14259-bf4e-47d1-952c-c17076756fd5-kube-api-access-lzc5x\") pod \"certified-operators-b2dtp\" (UID: \"d0e14259-bf4e-47d1-952c-c17076756fd5\") " pod="openshift-marketplace/certified-operators-b2dtp" Feb 02 17:30:49 crc kubenswrapper[4858]: I0202 17:30:49.291950 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b2dtp" Feb 02 17:30:49 crc kubenswrapper[4858]: I0202 17:30:49.550946 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9n2d" event={"ID":"193d2b9e-dc31-4d42-971b-ff706ff40bb1","Type":"ContainerStarted","Data":"2b15d1c09c0ca2431258629272947923336475ad51087a5ddcfeac415e6c87ce"} Feb 02 17:30:49 crc kubenswrapper[4858]: I0202 17:30:49.558813 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"703d6256-20d4-45fc-9a4c-ec6970ea250d","Type":"ContainerStarted","Data":"ede679909a3fcfcef65b788e84fe3faf29143b54bebbbd5f54d2ab501830d6e4"} Feb 02 17:30:49 crc kubenswrapper[4858]: I0202 17:30:49.690180 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-h6kmt-config-4tcxc"] Feb 02 17:30:49 crc kubenswrapper[4858]: I0202 17:30:49.811400 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmntb"] Feb 02 17:30:49 crc kubenswrapper[4858]: W0202 17:30:49.820389 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod235550bc_ede9_4da9_a2d0_08253c0d3a29.slice/crio-f208c4379c334de9ed3652411c2faa6a6a53d0ff48bfcfc47777fe4b76625c4c WatchSource:0}: Error finding container f208c4379c334de9ed3652411c2faa6a6a53d0ff48bfcfc47777fe4b76625c4c: Status 404 returned error can't find the container with id f208c4379c334de9ed3652411c2faa6a6a53d0ff48bfcfc47777fe4b76625c4c Feb 02 17:30:49 crc kubenswrapper[4858]: I0202 17:30:49.902915 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b2dtp"] Feb 02 17:30:49 crc kubenswrapper[4858]: W0202 17:30:49.905826 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0e14259_bf4e_47d1_952c_c17076756fd5.slice/crio-a14949e7d5f4ebe13d53a9ce540ff4925afea1f658f3acdf4aaa9a6f9cbf1bdb WatchSource:0}: Error finding container a14949e7d5f4ebe13d53a9ce540ff4925afea1f658f3acdf4aaa9a6f9cbf1bdb: Status 404 returned error can't find the container with id a14949e7d5f4ebe13d53a9ce540ff4925afea1f658f3acdf4aaa9a6f9cbf1bdb Feb 02 17:30:50 crc kubenswrapper[4858]: I0202 17:30:50.576598 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"703d6256-20d4-45fc-9a4c-ec6970ea250d","Type":"ContainerStarted","Data":"be2dfd98f265f26e5f86dfea3b21138c3fcf48e1042d540a5343ca1ecfe00bc9"} Feb 02 17:30:50 crc kubenswrapper[4858]: I0202 17:30:50.579758 4858 generic.go:334] "Generic (PLEG): container finished" podID="d0e14259-bf4e-47d1-952c-c17076756fd5" containerID="a290d78cfebda816319cd79bd34d4a5368187947e1f8a7976fd7673b4f5f5565" exitCode=0 Feb 02 17:30:50 crc kubenswrapper[4858]: I0202 17:30:50.579853 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2dtp" event={"ID":"d0e14259-bf4e-47d1-952c-c17076756fd5","Type":"ContainerDied","Data":"a290d78cfebda816319cd79bd34d4a5368187947e1f8a7976fd7673b4f5f5565"} Feb 02 17:30:50 crc kubenswrapper[4858]: I0202 17:30:50.579883 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2dtp" event={"ID":"d0e14259-bf4e-47d1-952c-c17076756fd5","Type":"ContainerStarted","Data":"a14949e7d5f4ebe13d53a9ce540ff4925afea1f658f3acdf4aaa9a6f9cbf1bdb"} Feb 02 17:30:50 crc kubenswrapper[4858]: I0202 17:30:50.590501 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-h6kmt-config-4tcxc" event={"ID":"2f64d810-ad37-4de4-8a0f-694ea8f03456","Type":"ContainerStarted","Data":"362934c06ec15d7fb4f6465121f97953b5f5ab94a1652af9b4f36d7fe3075e5a"} Feb 02 17:30:50 crc kubenswrapper[4858]: I0202 17:30:50.590544 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-h6kmt-config-4tcxc" event={"ID":"2f64d810-ad37-4de4-8a0f-694ea8f03456","Type":"ContainerStarted","Data":"c21d2b892c0e55a0b17de03a1bd3f30c53aa0800aeaba3ea4c9e04b29b368e82"} Feb 02 17:30:50 crc kubenswrapper[4858]: I0202 17:30:50.592570 4858 generic.go:334] "Generic (PLEG): container finished" podID="193d2b9e-dc31-4d42-971b-ff706ff40bb1" containerID="2b15d1c09c0ca2431258629272947923336475ad51087a5ddcfeac415e6c87ce" exitCode=0 Feb 02 17:30:50 crc kubenswrapper[4858]: I0202 17:30:50.592638 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9n2d" event={"ID":"193d2b9e-dc31-4d42-971b-ff706ff40bb1","Type":"ContainerDied","Data":"2b15d1c09c0ca2431258629272947923336475ad51087a5ddcfeac415e6c87ce"} Feb 02 17:30:50 crc kubenswrapper[4858]: I0202 17:30:50.594575 4858 generic.go:334] "Generic (PLEG): container finished" podID="235550bc-ede9-4da9-a2d0-08253c0d3a29" containerID="05387fbc30d14850792220dd91943a894c2d2439ace70d0233f2abbd76d6eef9" exitCode=0 Feb 02 17:30:50 crc kubenswrapper[4858]: I0202 17:30:50.594618 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmntb" event={"ID":"235550bc-ede9-4da9-a2d0-08253c0d3a29","Type":"ContainerDied","Data":"05387fbc30d14850792220dd91943a894c2d2439ace70d0233f2abbd76d6eef9"} Feb 02 17:30:50 crc kubenswrapper[4858]: I0202 17:30:50.594633 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmntb" event={"ID":"235550bc-ede9-4da9-a2d0-08253c0d3a29","Type":"ContainerStarted","Data":"f208c4379c334de9ed3652411c2faa6a6a53d0ff48bfcfc47777fe4b76625c4c"} Feb 02 17:30:50 crc kubenswrapper[4858]: I0202 17:30:50.602186 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vpfb7" Feb 02 17:30:50 crc kubenswrapper[4858]: I0202 17:30:50.602225 4858 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vpfb7" Feb 02 17:30:50 crc kubenswrapper[4858]: I0202 17:30:50.627746 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=26.60736314 podStartE2EDuration="38.627729907s" podCreationTimestamp="2026-02-02 17:30:12 +0000 UTC" firstStartedPulling="2026-02-02 17:30:30.102415921 +0000 UTC m=+931.254831186" lastFinishedPulling="2026-02-02 17:30:42.122782688 +0000 UTC m=+943.275197953" observedRunningTime="2026-02-02 17:30:50.623526608 +0000 UTC m=+951.775941883" watchObservedRunningTime="2026-02-02 17:30:50.627729907 +0000 UTC m=+951.780145172" Feb 02 17:30:50 crc kubenswrapper[4858]: I0202 17:30:50.701916 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vpfb7" Feb 02 17:30:50 crc kubenswrapper[4858]: I0202 17:30:50.951673 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-xvphk"] Feb 02 17:30:50 crc kubenswrapper[4858]: I0202 17:30:50.955177 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-xvphk" Feb 02 17:30:50 crc kubenswrapper[4858]: I0202 17:30:50.958772 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 02 17:30:50 crc kubenswrapper[4858]: I0202 17:30:50.966670 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-xvphk"] Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.106704 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhcqs\" (UniqueName: \"kubernetes.io/projected/4a319bc8-c36c-410d-a23b-2f0aa98fd592-kube-api-access-zhcqs\") pod \"dnsmasq-dns-764c5664d7-xvphk\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") " pod="openstack/dnsmasq-dns-764c5664d7-xvphk" Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.106776 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-xvphk\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") " pod="openstack/dnsmasq-dns-764c5664d7-xvphk" Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.107352 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-dns-svc\") pod \"dnsmasq-dns-764c5664d7-xvphk\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") " pod="openstack/dnsmasq-dns-764c5664d7-xvphk" Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.112212 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-xvphk\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") " pod="openstack/dnsmasq-dns-764c5664d7-xvphk" Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.112297 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-config\") pod \"dnsmasq-dns-764c5664d7-xvphk\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") " 
pod="openstack/dnsmasq-dns-764c5664d7-xvphk" Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.112406 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-xvphk\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") " pod="openstack/dnsmasq-dns-764c5664d7-xvphk" Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.213765 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhcqs\" (UniqueName: \"kubernetes.io/projected/4a319bc8-c36c-410d-a23b-2f0aa98fd592-kube-api-access-zhcqs\") pod \"dnsmasq-dns-764c5664d7-xvphk\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") " pod="openstack/dnsmasq-dns-764c5664d7-xvphk" Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.213831 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-xvphk\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") " pod="openstack/dnsmasq-dns-764c5664d7-xvphk" Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.213870 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-dns-svc\") pod \"dnsmasq-dns-764c5664d7-xvphk\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") " pod="openstack/dnsmasq-dns-764c5664d7-xvphk" Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.213910 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-xvphk\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") " pod="openstack/dnsmasq-dns-764c5664d7-xvphk" Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.213932 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-config\") pod \"dnsmasq-dns-764c5664d7-xvphk\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") " pod="openstack/dnsmasq-dns-764c5664d7-xvphk" Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.213961 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-xvphk\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") " pod="openstack/dnsmasq-dns-764c5664d7-xvphk" Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.214951 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-xvphk\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") " pod="openstack/dnsmasq-dns-764c5664d7-xvphk" Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.215079 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-config\") pod \"dnsmasq-dns-764c5664d7-xvphk\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") " pod="openstack/dnsmasq-dns-764c5664d7-xvphk" Feb 02 17:30:51 crc 
kubenswrapper[4858]: I0202 17:30:51.215180 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-xvphk\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") " pod="openstack/dnsmasq-dns-764c5664d7-xvphk" Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.216004 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-xvphk\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") " pod="openstack/dnsmasq-dns-764c5664d7-xvphk" Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.216305 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-dns-svc\") pod \"dnsmasq-dns-764c5664d7-xvphk\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") " pod="openstack/dnsmasq-dns-764c5664d7-xvphk" Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.236464 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhcqs\" (UniqueName: \"kubernetes.io/projected/4a319bc8-c36c-410d-a23b-2f0aa98fd592-kube-api-access-zhcqs\") pod \"dnsmasq-dns-764c5664d7-xvphk\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") " pod="openstack/dnsmasq-dns-764c5664d7-xvphk" Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.284109 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-xvphk" Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.603993 4858 generic.go:334] "Generic (PLEG): container finished" podID="2f64d810-ad37-4de4-8a0f-694ea8f03456" containerID="362934c06ec15d7fb4f6465121f97953b5f5ab94a1652af9b4f36d7fe3075e5a" exitCode=0 Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.604044 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-h6kmt-config-4tcxc" event={"ID":"2f64d810-ad37-4de4-8a0f-694ea8f03456","Type":"ContainerDied","Data":"362934c06ec15d7fb4f6465121f97953b5f5ab94a1652af9b4f36d7fe3075e5a"} Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.606030 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9n2d" event={"ID":"193d2b9e-dc31-4d42-971b-ff706ff40bb1","Type":"ContainerStarted","Data":"d96931cae6beb83b24dd3a0e4cff3c5db42e87d46301a56018944b5e9a41f732"} Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.666011 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vpfb7" Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.666626 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z9n2d" podStartSLOduration=2.857178053 podStartE2EDuration="11.666604986s" podCreationTimestamp="2026-02-02 17:30:40 +0000 UTC" firstStartedPulling="2026-02-02 17:30:42.438256592 +0000 UTC m=+943.590671857" lastFinishedPulling="2026-02-02 17:30:51.247683515 +0000 UTC m=+952.400098790" observedRunningTime="2026-02-02 17:30:51.663663393 +0000 UTC m=+952.816078678" watchObservedRunningTime="2026-02-02 17:30:51.666604986 +0000 UTC m=+952.819020251" Feb 02 17:30:51 crc kubenswrapper[4858]: I0202 17:30:51.716926 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-764c5664d7-xvphk"] Feb 02 17:30:52 crc kubenswrapper[4858]: I0202 17:30:52.616675 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2dtp" event={"ID":"d0e14259-bf4e-47d1-952c-c17076756fd5","Type":"ContainerStarted","Data":"13804e8dac9eba4684630a3d44b00b02724230a7c8ed82ea16d1c3dd774e6785"} Feb 02 17:30:52 crc kubenswrapper[4858]: I0202 17:30:52.621770 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-xvphk" event={"ID":"4a319bc8-c36c-410d-a23b-2f0aa98fd592","Type":"ContainerDied","Data":"40d48e69ebca237d91616f5280185e5a3e05e83532e9bdd5b47de07eb5ce5fc4"} Feb 02 17:30:52 crc kubenswrapper[4858]: I0202 17:30:52.621859 4858 generic.go:334] "Generic (PLEG): container finished" podID="4a319bc8-c36c-410d-a23b-2f0aa98fd592" containerID="40d48e69ebca237d91616f5280185e5a3e05e83532e9bdd5b47de07eb5ce5fc4" exitCode=0 Feb 02 17:30:52 crc kubenswrapper[4858]: I0202 17:30:52.621966 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-xvphk" event={"ID":"4a319bc8-c36c-410d-a23b-2f0aa98fd592","Type":"ContainerStarted","Data":"1735c071fdabdafbf22daae1451ef29a80a7753fc0096ed33ea4e51070dd8bf3"} Feb 02 17:30:52 crc kubenswrapper[4858]: I0202 17:30:52.626574 4858 generic.go:334] "Generic (PLEG): container finished" podID="235550bc-ede9-4da9-a2d0-08253c0d3a29" containerID="d411d2814a090cbe1078c467c0e3f16e55dbbc238e0be064f3e407364d02f674" exitCode=0 Feb 02 17:30:52 crc kubenswrapper[4858]: I0202 17:30:52.626675 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmntb" event={"ID":"235550bc-ede9-4da9-a2d0-08253c0d3a29","Type":"ContainerDied","Data":"d411d2814a090cbe1078c467c0e3f16e55dbbc238e0be064f3e407364d02f674"} Feb 02 17:30:52 crc kubenswrapper[4858]: I0202 17:30:52.914810 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-h6kmt-config-4tcxc" Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.048093 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzkh8\" (UniqueName: \"kubernetes.io/projected/2f64d810-ad37-4de4-8a0f-694ea8f03456-kube-api-access-tzkh8\") pod \"2f64d810-ad37-4de4-8a0f-694ea8f03456\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.048233 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2f64d810-ad37-4de4-8a0f-694ea8f03456-additional-scripts\") pod \"2f64d810-ad37-4de4-8a0f-694ea8f03456\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.048276 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f64d810-ad37-4de4-8a0f-694ea8f03456-var-run\") pod \"2f64d810-ad37-4de4-8a0f-694ea8f03456\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.048327 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f64d810-ad37-4de4-8a0f-694ea8f03456-scripts\") pod \"2f64d810-ad37-4de4-8a0f-694ea8f03456\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.048390 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f64d810-ad37-4de4-8a0f-694ea8f03456-var-run-ovn\") pod \"2f64d810-ad37-4de4-8a0f-694ea8f03456\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.048551 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f64d810-ad37-4de4-8a0f-694ea8f03456-var-log-ovn\") pod \"2f64d810-ad37-4de4-8a0f-694ea8f03456\" (UID: \"2f64d810-ad37-4de4-8a0f-694ea8f03456\") " Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.048369 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f64d810-ad37-4de4-8a0f-694ea8f03456-var-run" (OuterVolumeSpecName: "var-run") pod "2f64d810-ad37-4de4-8a0f-694ea8f03456" (UID: "2f64d810-ad37-4de4-8a0f-694ea8f03456"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.048509 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f64d810-ad37-4de4-8a0f-694ea8f03456-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2f64d810-ad37-4de4-8a0f-694ea8f03456" (UID: "2f64d810-ad37-4de4-8a0f-694ea8f03456"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.048783 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f64d810-ad37-4de4-8a0f-694ea8f03456-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2f64d810-ad37-4de4-8a0f-694ea8f03456" (UID: "2f64d810-ad37-4de4-8a0f-694ea8f03456"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.049067 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f64d810-ad37-4de4-8a0f-694ea8f03456-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2f64d810-ad37-4de4-8a0f-694ea8f03456" (UID: "2f64d810-ad37-4de4-8a0f-694ea8f03456"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.049322 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f64d810-ad37-4de4-8a0f-694ea8f03456-scripts" (OuterVolumeSpecName: "scripts") pod "2f64d810-ad37-4de4-8a0f-694ea8f03456" (UID: "2f64d810-ad37-4de4-8a0f-694ea8f03456"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.050740 4858 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2f64d810-ad37-4de4-8a0f-694ea8f03456-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.050859 4858 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f64d810-ad37-4de4-8a0f-694ea8f03456-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.050962 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f64d810-ad37-4de4-8a0f-694ea8f03456-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.051070 4858 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f64d810-ad37-4de4-8a0f-694ea8f03456-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.051145 4858 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f64d810-ad37-4de4-8a0f-694ea8f03456-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.052853 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f64d810-ad37-4de4-8a0f-694ea8f03456-kube-api-access-tzkh8" (OuterVolumeSpecName: "kube-api-access-tzkh8") pod "2f64d810-ad37-4de4-8a0f-694ea8f03456" (UID: "2f64d810-ad37-4de4-8a0f-694ea8f03456"). InnerVolumeSpecName "kube-api-access-tzkh8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.152157 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzkh8\" (UniqueName: \"kubernetes.io/projected/2f64d810-ad37-4de4-8a0f-694ea8f03456-kube-api-access-tzkh8\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.639951 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmntb" event={"ID":"235550bc-ede9-4da9-a2d0-08253c0d3a29","Type":"ContainerStarted","Data":"94e0f61c9aa16fc8d8c0ece1da7390f3dec6b624a5a379d30b058db4ef3b516e"} Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.643224 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-xvphk" event={"ID":"4a319bc8-c36c-410d-a23b-2f0aa98fd592","Type":"ContainerStarted","Data":"9081a790c5311972928eb597bc3bca2103103936f13d30384e3b64ad892791bc"} Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.643355 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-xvphk" Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.645804 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-h6kmt-config-4tcxc" event={"ID":"2f64d810-ad37-4de4-8a0f-694ea8f03456","Type":"ContainerDied","Data":"c21d2b892c0e55a0b17de03a1bd3f30c53aa0800aeaba3ea4c9e04b29b368e82"} Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.645840 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-h6kmt-config-4tcxc" Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.645847 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c21d2b892c0e55a0b17de03a1bd3f30c53aa0800aeaba3ea4c9e04b29b368e82" Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.689355 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pmntb" podStartSLOduration=6.168587209 podStartE2EDuration="7.689332542s" podCreationTimestamp="2026-02-02 17:30:46 +0000 UTC" firstStartedPulling="2026-02-02 17:30:51.607649044 +0000 UTC m=+952.760064309" lastFinishedPulling="2026-02-02 17:30:53.128394377 +0000 UTC m=+954.280809642" observedRunningTime="2026-02-02 17:30:53.679207196 +0000 UTC m=+954.831622471" watchObservedRunningTime="2026-02-02 17:30:53.689332542 +0000 UTC m=+954.841747807" Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.727487 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-xvphk" podStartSLOduration=3.727463257 podStartE2EDuration="3.727463257s" podCreationTimestamp="2026-02-02 17:30:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:30:53.717863186 +0000 UTC m=+954.870278501" watchObservedRunningTime="2026-02-02 17:30:53.727463257 +0000 UTC m=+954.879878522" Feb 02 17:30:53 crc kubenswrapper[4858]: I0202 17:30:53.992393 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-h6kmt-config-4tcxc"] Feb 02 17:30:54 crc kubenswrapper[4858]: I0202 17:30:54.002275 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-h6kmt-config-4tcxc"] Feb 02 17:30:54 crc kubenswrapper[4858]: I0202 17:30:54.412868 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2f64d810-ad37-4de4-8a0f-694ea8f03456" path="/var/lib/kubelet/pods/2f64d810-ad37-4de4-8a0f-694ea8f03456/volumes" Feb 02 17:30:54 crc kubenswrapper[4858]: I0202 17:30:54.660066 4858 generic.go:334] "Generic (PLEG): container finished" podID="d0e14259-bf4e-47d1-952c-c17076756fd5" containerID="13804e8dac9eba4684630a3d44b00b02724230a7c8ed82ea16d1c3dd774e6785" exitCode=0 Feb 02 17:30:54 crc kubenswrapper[4858]: I0202 17:30:54.660232 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2dtp" event={"ID":"d0e14259-bf4e-47d1-952c-c17076756fd5","Type":"ContainerDied","Data":"13804e8dac9eba4684630a3d44b00b02724230a7c8ed82ea16d1c3dd774e6785"} Feb 02 17:30:54 crc kubenswrapper[4858]: I0202 17:30:54.842615 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vpfb7"] Feb 02 17:30:54 crc kubenswrapper[4858]: I0202 17:30:54.843134 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vpfb7" podUID="920ebb7d-135e-4b6b-9f23-4649a33e8896" containerName="registry-server" containerID="cri-o://068ae0589b0c28ba85e119c5d2ba19080b6ba284ab34e08e6f48f630fc022b29" gracePeriod=2 Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.317539 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vpfb7" Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.423609 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/920ebb7d-135e-4b6b-9f23-4649a33e8896-catalog-content\") pod \"920ebb7d-135e-4b6b-9f23-4649a33e8896\" (UID: \"920ebb7d-135e-4b6b-9f23-4649a33e8896\") " Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.424176 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krfb8\" (UniqueName: \"kubernetes.io/projected/920ebb7d-135e-4b6b-9f23-4649a33e8896-kube-api-access-krfb8\") pod \"920ebb7d-135e-4b6b-9f23-4649a33e8896\" (UID: \"920ebb7d-135e-4b6b-9f23-4649a33e8896\") " Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.424233 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/920ebb7d-135e-4b6b-9f23-4649a33e8896-utilities\") pod \"920ebb7d-135e-4b6b-9f23-4649a33e8896\" (UID: \"920ebb7d-135e-4b6b-9f23-4649a33e8896\") " Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.425095 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/920ebb7d-135e-4b6b-9f23-4649a33e8896-utilities" (OuterVolumeSpecName: "utilities") pod "920ebb7d-135e-4b6b-9f23-4649a33e8896" (UID: "920ebb7d-135e-4b6b-9f23-4649a33e8896"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.429330 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/920ebb7d-135e-4b6b-9f23-4649a33e8896-kube-api-access-krfb8" (OuterVolumeSpecName: "kube-api-access-krfb8") pod "920ebb7d-135e-4b6b-9f23-4649a33e8896" (UID: "920ebb7d-135e-4b6b-9f23-4649a33e8896"). InnerVolumeSpecName "kube-api-access-krfb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.519517 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/920ebb7d-135e-4b6b-9f23-4649a33e8896-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "920ebb7d-135e-4b6b-9f23-4649a33e8896" (UID: "920ebb7d-135e-4b6b-9f23-4649a33e8896"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.525796 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/920ebb7d-135e-4b6b-9f23-4649a33e8896-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.525926 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krfb8\" (UniqueName: \"kubernetes.io/projected/920ebb7d-135e-4b6b-9f23-4649a33e8896-kube-api-access-krfb8\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.526029 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/920ebb7d-135e-4b6b-9f23-4649a33e8896-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.671111 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2dtp" event={"ID":"d0e14259-bf4e-47d1-952c-c17076756fd5","Type":"ContainerStarted","Data":"b080a9610b49d8d11666af825d11cc07a177aa34548a7c776a49840ba8ba85a5"} Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.673617 4858 generic.go:334] "Generic (PLEG): container finished" podID="5cb04b10-2484-4c41-903c-d12ea9ab3600" containerID="a7c6251270468a60f8437db0f238d31075f1433aaa75c717687708698d74c638" exitCode=0 Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.673702 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cv9mb" event={"ID":"5cb04b10-2484-4c41-903c-d12ea9ab3600","Type":"ContainerDied","Data":"a7c6251270468a60f8437db0f238d31075f1433aaa75c717687708698d74c638"} Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.677548 4858 generic.go:334] "Generic (PLEG): container finished" podID="920ebb7d-135e-4b6b-9f23-4649a33e8896" containerID="068ae0589b0c28ba85e119c5d2ba19080b6ba284ab34e08e6f48f630fc022b29" exitCode=0 Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.677695 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpfb7" event={"ID":"920ebb7d-135e-4b6b-9f23-4649a33e8896","Type":"ContainerDied","Data":"068ae0589b0c28ba85e119c5d2ba19080b6ba284ab34e08e6f48f630fc022b29"} Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.677762 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vpfb7" event={"ID":"920ebb7d-135e-4b6b-9f23-4649a33e8896","Type":"ContainerDied","Data":"86c4072aeb6c80ed779af3a7f45942c51c958922539212a374b7dfa22805ca43"} Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.677797 4858 scope.go:117] "RemoveContainer" containerID="068ae0589b0c28ba85e119c5d2ba19080b6ba284ab34e08e6f48f630fc022b29" Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.678052 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vpfb7" Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.697560 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b2dtp" podStartSLOduration=4.224002759 podStartE2EDuration="7.697538477s" podCreationTimestamp="2026-02-02 17:30:48 +0000 UTC" firstStartedPulling="2026-02-02 17:30:51.608029334 +0000 UTC m=+952.760444599" lastFinishedPulling="2026-02-02 17:30:55.081565062 +0000 UTC m=+956.233980317" observedRunningTime="2026-02-02 17:30:55.691695783 +0000 UTC m=+956.844111058" watchObservedRunningTime="2026-02-02 17:30:55.697538477 +0000 UTC m=+956.849953742" Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.730306 4858 scope.go:117] "RemoveContainer" containerID="c851d14184ee121b784ba7af2bcdb41985e72ead5d58397597e3baa093b586a6" Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.741808 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vpfb7"] Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.750951 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vpfb7"] Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.762729 4858 scope.go:117] "RemoveContainer" containerID="a4b5315f173af3b098819db9b8431e6e2de941df65b5a77a929f46ec2893d3ae" Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.780444 4858 scope.go:117] "RemoveContainer" containerID="068ae0589b0c28ba85e119c5d2ba19080b6ba284ab34e08e6f48f630fc022b29" Feb 02 17:30:55 crc kubenswrapper[4858]: E0202 17:30:55.780844 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"068ae0589b0c28ba85e119c5d2ba19080b6ba284ab34e08e6f48f630fc022b29\": container with ID starting with 068ae0589b0c28ba85e119c5d2ba19080b6ba284ab34e08e6f48f630fc022b29 not found: ID does not exist" containerID="068ae0589b0c28ba85e119c5d2ba19080b6ba284ab34e08e6f48f630fc022b29" Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.780946 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"068ae0589b0c28ba85e119c5d2ba19080b6ba284ab34e08e6f48f630fc022b29"} err="failed to get container status \"068ae0589b0c28ba85e119c5d2ba19080b6ba284ab34e08e6f48f630fc022b29\": rpc error: code = NotFound desc = could not find container \"068ae0589b0c28ba85e119c5d2ba19080b6ba284ab34e08e6f48f630fc022b29\": container with ID starting with 068ae0589b0c28ba85e119c5d2ba19080b6ba284ab34e08e6f48f630fc022b29 not found: ID does not exist" Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.781122 4858 scope.go:117] "RemoveContainer" containerID="c851d14184ee121b784ba7af2bcdb41985e72ead5d58397597e3baa093b586a6" Feb 02 17:30:55 crc kubenswrapper[4858]: E0202 17:30:55.781587 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c851d14184ee121b784ba7af2bcdb41985e72ead5d58397597e3baa093b586a6\": container with ID starting with c851d14184ee121b784ba7af2bcdb41985e72ead5d58397597e3baa093b586a6 not found: ID does not exist" containerID="c851d14184ee121b784ba7af2bcdb41985e72ead5d58397597e3baa093b586a6" Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.781629 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c851d14184ee121b784ba7af2bcdb41985e72ead5d58397597e3baa093b586a6"} err="failed to get container status 
\"c851d14184ee121b784ba7af2bcdb41985e72ead5d58397597e3baa093b586a6\": rpc error: code = NotFound desc = could not find container \"c851d14184ee121b784ba7af2bcdb41985e72ead5d58397597e3baa093b586a6\": container with ID starting with c851d14184ee121b784ba7af2bcdb41985e72ead5d58397597e3baa093b586a6 not found: ID does not exist" Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.781659 4858 scope.go:117] "RemoveContainer" containerID="a4b5315f173af3b098819db9b8431e6e2de941df65b5a77a929f46ec2893d3ae" Feb 02 17:30:55 crc kubenswrapper[4858]: E0202 17:30:55.782163 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4b5315f173af3b098819db9b8431e6e2de941df65b5a77a929f46ec2893d3ae\": container with ID starting with a4b5315f173af3b098819db9b8431e6e2de941df65b5a77a929f46ec2893d3ae not found: ID does not exist" containerID="a4b5315f173af3b098819db9b8431e6e2de941df65b5a77a929f46ec2893d3ae" Feb 02 17:30:55 crc kubenswrapper[4858]: I0202 17:30:55.782215 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4b5315f173af3b098819db9b8431e6e2de941df65b5a77a929f46ec2893d3ae"} err="failed to get container status \"a4b5315f173af3b098819db9b8431e6e2de941df65b5a77a929f46ec2893d3ae\": rpc error: code = NotFound desc = could not find container \"a4b5315f173af3b098819db9b8431e6e2de941df65b5a77a929f46ec2893d3ae\": container with ID starting with a4b5315f173af3b098819db9b8431e6e2de941df65b5a77a929f46ec2893d3ae not found: ID does not exist" Feb 02 17:30:56 crc kubenswrapper[4858]: I0202 17:30:56.411939 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="920ebb7d-135e-4b6b-9f23-4649a33e8896" path="/var/lib/kubelet/pods/920ebb7d-135e-4b6b-9f23-4649a33e8896/volumes" Feb 02 17:30:56 crc kubenswrapper[4858]: I0202 17:30:56.595134 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pmntb" Feb 02 17:30:56 crc kubenswrapper[4858]: I0202 17:30:56.596480 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pmntb" Feb 02 17:30:56 crc kubenswrapper[4858]: I0202 17:30:56.671341 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pmntb" Feb 02 17:30:56 crc kubenswrapper[4858]: I0202 17:30:56.774182 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.136599 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-jmm6p"] Feb 02 17:30:57 crc kubenswrapper[4858]: E0202 17:30:57.137185 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="920ebb7d-135e-4b6b-9f23-4649a33e8896" containerName="extract-utilities" Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.137205 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="920ebb7d-135e-4b6b-9f23-4649a33e8896" containerName="extract-utilities" Feb 02 17:30:57 crc kubenswrapper[4858]: E0202 17:30:57.137227 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f64d810-ad37-4de4-8a0f-694ea8f03456" containerName="ovn-config" Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.137233 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f64d810-ad37-4de4-8a0f-694ea8f03456" containerName="ovn-config" Feb 02 17:30:57 crc kubenswrapper[4858]: E0202 
17:30:57.137244 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="920ebb7d-135e-4b6b-9f23-4649a33e8896" containerName="registry-server"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.137251 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="920ebb7d-135e-4b6b-9f23-4649a33e8896" containerName="registry-server"
Feb 02 17:30:57 crc kubenswrapper[4858]: E0202 17:30:57.137265 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="920ebb7d-135e-4b6b-9f23-4649a33e8896" containerName="extract-content"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.137270 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="920ebb7d-135e-4b6b-9f23-4649a33e8896" containerName="extract-content"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.137419 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f64d810-ad37-4de4-8a0f-694ea8f03456" containerName="ovn-config"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.137440 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="920ebb7d-135e-4b6b-9f23-4649a33e8896" containerName="registry-server"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.142872 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jmm6p"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.153930 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cv9mb"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.156476 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jmm6p"]
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.178607 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.258437 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb04b10-2484-4c41-903c-d12ea9ab3600-combined-ca-bundle\") pod \"5cb04b10-2484-4c41-903c-d12ea9ab3600\" (UID: \"5cb04b10-2484-4c41-903c-d12ea9ab3600\") "
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.258496 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb04b10-2484-4c41-903c-d12ea9ab3600-config-data\") pod \"5cb04b10-2484-4c41-903c-d12ea9ab3600\" (UID: \"5cb04b10-2484-4c41-903c-d12ea9ab3600\") "
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.258690 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6krd\" (UniqueName: \"kubernetes.io/projected/5cb04b10-2484-4c41-903c-d12ea9ab3600-kube-api-access-c6krd\") pod \"5cb04b10-2484-4c41-903c-d12ea9ab3600\" (UID: \"5cb04b10-2484-4c41-903c-d12ea9ab3600\") "
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.258725 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5cb04b10-2484-4c41-903c-d12ea9ab3600-db-sync-config-data\") pod \"5cb04b10-2484-4c41-903c-d12ea9ab3600\" (UID: \"5cb04b10-2484-4c41-903c-d12ea9ab3600\") "
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.259049 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/158f29d9-d8c9-47ee-912c-05108d7bec02-operator-scripts\") pod \"barbican-db-create-jmm6p\" (UID: \"158f29d9-d8c9-47ee-912c-05108d7bec02\") " pod="openstack/barbican-db-create-jmm6p"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.259140 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6fwp\" (UniqueName: \"kubernetes.io/projected/158f29d9-d8c9-47ee-912c-05108d7bec02-kube-api-access-x6fwp\") pod \"barbican-db-create-jmm6p\" (UID: \"158f29d9-d8c9-47ee-912c-05108d7bec02\") " pod="openstack/barbican-db-create-jmm6p"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.268051 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-ps6sb"]
Feb 02 17:30:57 crc kubenswrapper[4858]: E0202 17:30:57.268884 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cb04b10-2484-4c41-903c-d12ea9ab3600" containerName="glance-db-sync"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.268906 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cb04b10-2484-4c41-903c-d12ea9ab3600" containerName="glance-db-sync"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.269098 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cb04b10-2484-4c41-903c-d12ea9ab3600" containerName="glance-db-sync"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.269663 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ps6sb"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.292502 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb04b10-2484-4c41-903c-d12ea9ab3600-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5cb04b10-2484-4c41-903c-d12ea9ab3600" (UID: "5cb04b10-2484-4c41-903c-d12ea9ab3600"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.292577 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cb04b10-2484-4c41-903c-d12ea9ab3600-kube-api-access-c6krd" (OuterVolumeSpecName: "kube-api-access-c6krd") pod "5cb04b10-2484-4c41-903c-d12ea9ab3600" (UID: "5cb04b10-2484-4c41-903c-d12ea9ab3600"). InnerVolumeSpecName "kube-api-access-c6krd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.296074 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ps6sb"]
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.322380 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb04b10-2484-4c41-903c-d12ea9ab3600-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cb04b10-2484-4c41-903c-d12ea9ab3600" (UID: "5cb04b10-2484-4c41-903c-d12ea9ab3600"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.351213 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-c58f-account-create-update-qp9nt"]
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.352627 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c58f-account-create-update-qp9nt"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.355788 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.360336 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6fwp\" (UniqueName: \"kubernetes.io/projected/158f29d9-d8c9-47ee-912c-05108d7bec02-kube-api-access-x6fwp\") pod \"barbican-db-create-jmm6p\" (UID: \"158f29d9-d8c9-47ee-912c-05108d7bec02\") " pod="openstack/barbican-db-create-jmm6p"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.360512 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/158f29d9-d8c9-47ee-912c-05108d7bec02-operator-scripts\") pod \"barbican-db-create-jmm6p\" (UID: \"158f29d9-d8c9-47ee-912c-05108d7bec02\") " pod="openstack/barbican-db-create-jmm6p"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.360562 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb04b10-2484-4c41-903c-d12ea9ab3600-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.360575 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6krd\" (UniqueName: \"kubernetes.io/projected/5cb04b10-2484-4c41-903c-d12ea9ab3600-kube-api-access-c6krd\") on node \"crc\" DevicePath \"\""
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.360585 4858 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5cb04b10-2484-4c41-903c-d12ea9ab3600-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.361328 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c58f-account-create-update-qp9nt"]
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.361451 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/158f29d9-d8c9-47ee-912c-05108d7bec02-operator-scripts\") pod \"barbican-db-create-jmm6p\" (UID: \"158f29d9-d8c9-47ee-912c-05108d7bec02\") " pod="openstack/barbican-db-create-jmm6p"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.384390 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6fwp\" (UniqueName: \"kubernetes.io/projected/158f29d9-d8c9-47ee-912c-05108d7bec02-kube-api-access-x6fwp\") pod \"barbican-db-create-jmm6p\" (UID: \"158f29d9-d8c9-47ee-912c-05108d7bec02\") " pod="openstack/barbican-db-create-jmm6p"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.419368 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb04b10-2484-4c41-903c-d12ea9ab3600-config-data" (OuterVolumeSpecName: "config-data") pod "5cb04b10-2484-4c41-903c-d12ea9ab3600" (UID: "5cb04b10-2484-4c41-903c-d12ea9ab3600"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.447122 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-cc1b-account-create-update-tbdcd"]
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.448213 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cc1b-account-create-update-tbdcd"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.449957 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.461397 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cc1b-account-create-update-tbdcd"]
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.461497 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4449167-7d55-4675-9dd6-20094b472bd0-operator-scripts\") pod \"cinder-db-create-ps6sb\" (UID: \"a4449167-7d55-4675-9dd6-20094b472bd0\") " pod="openstack/cinder-db-create-ps6sb"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.461594 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4eb91469-c691-4b21-a5d3-e422d2d36cc3-operator-scripts\") pod \"barbican-c58f-account-create-update-qp9nt\" (UID: \"4eb91469-c691-4b21-a5d3-e422d2d36cc3\") " pod="openstack/barbican-c58f-account-create-update-qp9nt"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.461693 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbhsc\" (UniqueName: \"kubernetes.io/projected/4eb91469-c691-4b21-a5d3-e422d2d36cc3-kube-api-access-wbhsc\") pod \"barbican-c58f-account-create-update-qp9nt\" (UID: \"4eb91469-c691-4b21-a5d3-e422d2d36cc3\") " pod="openstack/barbican-c58f-account-create-update-qp9nt"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.461730 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzrjw\" (UniqueName: \"kubernetes.io/projected/a4449167-7d55-4675-9dd6-20094b472bd0-kube-api-access-fzrjw\") pod \"cinder-db-create-ps6sb\" (UID: \"a4449167-7d55-4675-9dd6-20094b472bd0\") " pod="openstack/cinder-db-create-ps6sb"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.461786 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb04b10-2484-4c41-903c-d12ea9ab3600-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.473011 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jmm6p"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.523377 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-kljgl"]
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.524422 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-kljgl"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.530101 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-kljgl"]
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.565382 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4eb91469-c691-4b21-a5d3-e422d2d36cc3-operator-scripts\") pod \"barbican-c58f-account-create-update-qp9nt\" (UID: \"4eb91469-c691-4b21-a5d3-e422d2d36cc3\") " pod="openstack/barbican-c58f-account-create-update-qp9nt"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.565480 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a82f861-3468-4057-851d-05836166f30b-operator-scripts\") pod \"cinder-cc1b-account-create-update-tbdcd\" (UID: \"8a82f861-3468-4057-851d-05836166f30b\") " pod="openstack/cinder-cc1b-account-create-update-tbdcd"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.565530 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbhsc\" (UniqueName: \"kubernetes.io/projected/4eb91469-c691-4b21-a5d3-e422d2d36cc3-kube-api-access-wbhsc\") pod \"barbican-c58f-account-create-update-qp9nt\" (UID: \"4eb91469-c691-4b21-a5d3-e422d2d36cc3\") " pod="openstack/barbican-c58f-account-create-update-qp9nt"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.565593 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzrjw\" (UniqueName: \"kubernetes.io/projected/a4449167-7d55-4675-9dd6-20094b472bd0-kube-api-access-fzrjw\") pod \"cinder-db-create-ps6sb\" (UID: \"a4449167-7d55-4675-9dd6-20094b472bd0\") " pod="openstack/cinder-db-create-ps6sb"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.565633 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4449167-7d55-4675-9dd6-20094b472bd0-operator-scripts\") pod \"cinder-db-create-ps6sb\" (UID: \"a4449167-7d55-4675-9dd6-20094b472bd0\") " pod="openstack/cinder-db-create-ps6sb"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.565678 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mhnq\" (UniqueName: \"kubernetes.io/projected/8a82f861-3468-4057-851d-05836166f30b-kube-api-access-2mhnq\") pod \"cinder-cc1b-account-create-update-tbdcd\" (UID: \"8a82f861-3468-4057-851d-05836166f30b\") " pod="openstack/cinder-cc1b-account-create-update-tbdcd"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.566038 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4eb91469-c691-4b21-a5d3-e422d2d36cc3-operator-scripts\") pod \"barbican-c58f-account-create-update-qp9nt\" (UID: \"4eb91469-c691-4b21-a5d3-e422d2d36cc3\") " pod="openstack/barbican-c58f-account-create-update-qp9nt"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.566585 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4449167-7d55-4675-9dd6-20094b472bd0-operator-scripts\") pod \"cinder-db-create-ps6sb\" (UID: \"a4449167-7d55-4675-9dd6-20094b472bd0\") " pod="openstack/cinder-db-create-ps6sb"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.584808 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzrjw\" (UniqueName: \"kubernetes.io/projected/a4449167-7d55-4675-9dd6-20094b472bd0-kube-api-access-fzrjw\") pod \"cinder-db-create-ps6sb\" (UID: \"a4449167-7d55-4675-9dd6-20094b472bd0\") " pod="openstack/cinder-db-create-ps6sb"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.584808 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbhsc\" (UniqueName: \"kubernetes.io/projected/4eb91469-c691-4b21-a5d3-e422d2d36cc3-kube-api-access-wbhsc\") pod \"barbican-c58f-account-create-update-qp9nt\" (UID: \"4eb91469-c691-4b21-a5d3-e422d2d36cc3\") " pod="openstack/barbican-c58f-account-create-update-qp9nt"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.622132 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-n6lz8"]
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.623712 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-n6lz8"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.626053 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.626250 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7ztbs"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.626405 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.626527 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.638202 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-n6lz8"]
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.669319 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a82f861-3468-4057-851d-05836166f30b-operator-scripts\") pod \"cinder-cc1b-account-create-update-tbdcd\" (UID: \"8a82f861-3468-4057-851d-05836166f30b\") " pod="openstack/cinder-cc1b-account-create-update-tbdcd"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.669411 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w59mw\" (UniqueName: \"kubernetes.io/projected/509b5c9b-875d-410b-b427-d0ba51cf798c-kube-api-access-w59mw\") pod \"neutron-db-create-kljgl\" (UID: \"509b5c9b-875d-410b-b427-d0ba51cf798c\") " pod="openstack/neutron-db-create-kljgl"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.669498 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mhnq\" (UniqueName: \"kubernetes.io/projected/8a82f861-3468-4057-851d-05836166f30b-kube-api-access-2mhnq\") pod \"cinder-cc1b-account-create-update-tbdcd\" (UID: \"8a82f861-3468-4057-851d-05836166f30b\") " pod="openstack/cinder-cc1b-account-create-update-tbdcd"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.669527 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/509b5c9b-875d-410b-b427-d0ba51cf798c-operator-scripts\") pod \"neutron-db-create-kljgl\" (UID: \"509b5c9b-875d-410b-b427-d0ba51cf798c\") " pod="openstack/neutron-db-create-kljgl"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.670458 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a82f861-3468-4057-851d-05836166f30b-operator-scripts\") pod \"cinder-cc1b-account-create-update-tbdcd\" (UID: \"8a82f861-3468-4057-851d-05836166f30b\") " pod="openstack/cinder-cc1b-account-create-update-tbdcd"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.673103 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-86ad-account-create-update-s4qdw"]
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.674576 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86ad-account-create-update-s4qdw"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.682133 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.682858 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86ad-account-create-update-s4qdw"]
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.689661 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mhnq\" (UniqueName: \"kubernetes.io/projected/8a82f861-3468-4057-851d-05836166f30b-kube-api-access-2mhnq\") pod \"cinder-cc1b-account-create-update-tbdcd\" (UID: \"8a82f861-3468-4057-851d-05836166f30b\") " pod="openstack/cinder-cc1b-account-create-update-tbdcd"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.758246 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cv9mb" event={"ID":"5cb04b10-2484-4c41-903c-d12ea9ab3600","Type":"ContainerDied","Data":"de14d061bf892d7dbc63ba93e047bb51d209b2fa0235f5e0ba4618142a2cc9d7"}
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.758463 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de14d061bf892d7dbc63ba93e047bb51d209b2fa0235f5e0ba4618142a2cc9d7"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.758320 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cv9mb"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.758896 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ps6sb"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.771015 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/668a330f-e46e-4be4-9d42-2a547988e82b-combined-ca-bundle\") pod \"keystone-db-sync-n6lz8\" (UID: \"668a330f-e46e-4be4-9d42-2a547988e82b\") " pod="openstack/keystone-db-sync-n6lz8"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.771250 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907ec6b3-b751-400a-95ea-e69381ac7785-operator-scripts\") pod \"neutron-86ad-account-create-update-s4qdw\" (UID: \"907ec6b3-b751-400a-95ea-e69381ac7785\") " pod="openstack/neutron-86ad-account-create-update-s4qdw"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.771372 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx4qr\" (UniqueName: \"kubernetes.io/projected/668a330f-e46e-4be4-9d42-2a547988e82b-kube-api-access-lx4qr\") pod \"keystone-db-sync-n6lz8\" (UID: \"668a330f-e46e-4be4-9d42-2a547988e82b\") " pod="openstack/keystone-db-sync-n6lz8"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.771466 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w59mw\" (UniqueName: \"kubernetes.io/projected/509b5c9b-875d-410b-b427-d0ba51cf798c-kube-api-access-w59mw\") pod \"neutron-db-create-kljgl\" (UID: \"509b5c9b-875d-410b-b427-d0ba51cf798c\") " pod="openstack/neutron-db-create-kljgl"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.771565 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2h5n\" (UniqueName: \"kubernetes.io/projected/907ec6b3-b751-400a-95ea-e69381ac7785-kube-api-access-f2h5n\") pod \"neutron-86ad-account-create-update-s4qdw\" (UID: \"907ec6b3-b751-400a-95ea-e69381ac7785\") " pod="openstack/neutron-86ad-account-create-update-s4qdw"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.771652 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/668a330f-e46e-4be4-9d42-2a547988e82b-config-data\") pod \"keystone-db-sync-n6lz8\" (UID: \"668a330f-e46e-4be4-9d42-2a547988e82b\") " pod="openstack/keystone-db-sync-n6lz8"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.771725 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/509b5c9b-875d-410b-b427-d0ba51cf798c-operator-scripts\") pod \"neutron-db-create-kljgl\" (UID: \"509b5c9b-875d-410b-b427-d0ba51cf798c\") " pod="openstack/neutron-db-create-kljgl"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.779359 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c58f-account-create-update-qp9nt"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.779536 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/509b5c9b-875d-410b-b427-d0ba51cf798c-operator-scripts\") pod \"neutron-db-create-kljgl\" (UID: \"509b5c9b-875d-410b-b427-d0ba51cf798c\") " pod="openstack/neutron-db-create-kljgl"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.785576 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cc1b-account-create-update-tbdcd"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.798411 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w59mw\" (UniqueName: \"kubernetes.io/projected/509b5c9b-875d-410b-b427-d0ba51cf798c-kube-api-access-w59mw\") pod \"neutron-db-create-kljgl\" (UID: \"509b5c9b-875d-410b-b427-d0ba51cf798c\") " pod="openstack/neutron-db-create-kljgl"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.842748 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-kljgl"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.875047 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2h5n\" (UniqueName: \"kubernetes.io/projected/907ec6b3-b751-400a-95ea-e69381ac7785-kube-api-access-f2h5n\") pod \"neutron-86ad-account-create-update-s4qdw\" (UID: \"907ec6b3-b751-400a-95ea-e69381ac7785\") " pod="openstack/neutron-86ad-account-create-update-s4qdw"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.875096 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/668a330f-e46e-4be4-9d42-2a547988e82b-config-data\") pod \"keystone-db-sync-n6lz8\" (UID: \"668a330f-e46e-4be4-9d42-2a547988e82b\") " pod="openstack/keystone-db-sync-n6lz8"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.875147 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/668a330f-e46e-4be4-9d42-2a547988e82b-combined-ca-bundle\") pod \"keystone-db-sync-n6lz8\" (UID: \"668a330f-e46e-4be4-9d42-2a547988e82b\") " pod="openstack/keystone-db-sync-n6lz8"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.875174 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907ec6b3-b751-400a-95ea-e69381ac7785-operator-scripts\") pod \"neutron-86ad-account-create-update-s4qdw\" (UID: \"907ec6b3-b751-400a-95ea-e69381ac7785\") " pod="openstack/neutron-86ad-account-create-update-s4qdw"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.875261 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx4qr\" (UniqueName: \"kubernetes.io/projected/668a330f-e46e-4be4-9d42-2a547988e82b-kube-api-access-lx4qr\") pod \"keystone-db-sync-n6lz8\" (UID: \"668a330f-e46e-4be4-9d42-2a547988e82b\") " pod="openstack/keystone-db-sync-n6lz8"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.878368 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907ec6b3-b751-400a-95ea-e69381ac7785-operator-scripts\") pod \"neutron-86ad-account-create-update-s4qdw\" (UID: \"907ec6b3-b751-400a-95ea-e69381ac7785\") " pod="openstack/neutron-86ad-account-create-update-s4qdw"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.883691 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/668a330f-e46e-4be4-9d42-2a547988e82b-config-data\") pod \"keystone-db-sync-n6lz8\" (UID: \"668a330f-e46e-4be4-9d42-2a547988e82b\") " pod="openstack/keystone-db-sync-n6lz8"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.886510 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/668a330f-e46e-4be4-9d42-2a547988e82b-combined-ca-bundle\") pod \"keystone-db-sync-n6lz8\" (UID: \"668a330f-e46e-4be4-9d42-2a547988e82b\") " pod="openstack/keystone-db-sync-n6lz8"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.893305 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx4qr\" (UniqueName: \"kubernetes.io/projected/668a330f-e46e-4be4-9d42-2a547988e82b-kube-api-access-lx4qr\") pod \"keystone-db-sync-n6lz8\" (UID: \"668a330f-e46e-4be4-9d42-2a547988e82b\") " pod="openstack/keystone-db-sync-n6lz8"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.897153 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2h5n\" (UniqueName: \"kubernetes.io/projected/907ec6b3-b751-400a-95ea-e69381ac7785-kube-api-access-f2h5n\") pod \"neutron-86ad-account-create-update-s4qdw\" (UID: \"907ec6b3-b751-400a-95ea-e69381ac7785\") " pod="openstack/neutron-86ad-account-create-update-s4qdw"
Feb 02 17:30:57 crc kubenswrapper[4858]: I0202 17:30:57.945343 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-n6lz8"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.003680 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86ad-account-create-update-s4qdw"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.091345 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jmm6p"]
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.121858 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-xvphk"]
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.123373 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-xvphk" podUID="4a319bc8-c36c-410d-a23b-2f0aa98fd592" containerName="dnsmasq-dns" containerID="cri-o://9081a790c5311972928eb597bc3bca2103103936f13d30384e3b64ad892791bc" gracePeriod=10
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.141155 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-xvphk"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.183027 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-txp59"]
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.184390 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-txp59"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.200713 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-txp59"]
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.295736 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zlfg\" (UniqueName: \"kubernetes.io/projected/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-kube-api-access-8zlfg\") pod \"dnsmasq-dns-74f6bcbc87-txp59\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " pod="openstack/dnsmasq-dns-74f6bcbc87-txp59"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.295851 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-config\") pod \"dnsmasq-dns-74f6bcbc87-txp59\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " pod="openstack/dnsmasq-dns-74f6bcbc87-txp59"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.295911 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-txp59\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " pod="openstack/dnsmasq-dns-74f6bcbc87-txp59"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.295952 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-txp59\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " pod="openstack/dnsmasq-dns-74f6bcbc87-txp59"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.296029 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-txp59\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " pod="openstack/dnsmasq-dns-74f6bcbc87-txp59"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.296071 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-txp59\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " pod="openstack/dnsmasq-dns-74f6bcbc87-txp59"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.397155 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-config\") pod \"dnsmasq-dns-74f6bcbc87-txp59\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " pod="openstack/dnsmasq-dns-74f6bcbc87-txp59"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.397421 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-txp59\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " pod="openstack/dnsmasq-dns-74f6bcbc87-txp59"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.397449 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-txp59\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " pod="openstack/dnsmasq-dns-74f6bcbc87-txp59"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.397492 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-txp59\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " pod="openstack/dnsmasq-dns-74f6bcbc87-txp59"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.397522 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-txp59\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " pod="openstack/dnsmasq-dns-74f6bcbc87-txp59"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.397547 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zlfg\" (UniqueName: \"kubernetes.io/projected/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-kube-api-access-8zlfg\") pod \"dnsmasq-dns-74f6bcbc87-txp59\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " pod="openstack/dnsmasq-dns-74f6bcbc87-txp59"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.398841 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-config\") pod \"dnsmasq-dns-74f6bcbc87-txp59\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " pod="openstack/dnsmasq-dns-74f6bcbc87-txp59"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.399083 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-txp59\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " pod="openstack/dnsmasq-dns-74f6bcbc87-txp59"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.399773 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-txp59\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " pod="openstack/dnsmasq-dns-74f6bcbc87-txp59"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.399866 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-txp59\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " pod="openstack/dnsmasq-dns-74f6bcbc87-txp59"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.400287 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-txp59\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " pod="openstack/dnsmasq-dns-74f6bcbc87-txp59"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.421274 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zlfg\" (UniqueName: \"kubernetes.io/projected/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-kube-api-access-8zlfg\") pod \"dnsmasq-dns-74f6bcbc87-txp59\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " pod="openstack/dnsmasq-dns-74f6bcbc87-txp59"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.505814 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-txp59"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.787745 4858 generic.go:334] "Generic (PLEG): container finished" podID="4a319bc8-c36c-410d-a23b-2f0aa98fd592" containerID="9081a790c5311972928eb597bc3bca2103103936f13d30384e3b64ad892791bc" exitCode=0
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.788104 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-xvphk" event={"ID":"4a319bc8-c36c-410d-a23b-2f0aa98fd592","Type":"ContainerDied","Data":"9081a790c5311972928eb597bc3bca2103103936f13d30384e3b64ad892791bc"}
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.794459 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jmm6p" event={"ID":"158f29d9-d8c9-47ee-912c-05108d7bec02","Type":"ContainerStarted","Data":"1006f534a6c5a1f39109ab71d7511e7397d311ee5b16765bfe64f9869ee3d37b"}
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.794503 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jmm6p" event={"ID":"158f29d9-d8c9-47ee-912c-05108d7bec02","Type":"ContainerStarted","Data":"33d051cf52235fc42d7e2c2e826ec5571239b6f51510a39b6b3c5adf593ea352"}
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.818049 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cc1b-account-create-update-tbdcd"]
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.847665 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ps6sb"]
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.854357 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-jmm6p" podStartSLOduration=1.854338937 podStartE2EDuration="1.854338937s" podCreationTimestamp="2026-02-02 17:30:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:30:58.818415144 +0000 UTC m=+959.970830409" watchObservedRunningTime="2026-02-02 17:30:58.854338937 +0000 UTC m=+960.006754202"
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.889323 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-n6lz8"]
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.897677 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-kljgl"]
Feb 02 17:30:58 crc kubenswrapper[4858]: I0202 17:30:58.914704 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-xvphk"
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.006143 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-c58f-account-create-update-qp9nt"]
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.007361 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-ovsdbserver-sb\") pod \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") "
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.007407 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-dns-svc\") pod \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") "
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.007434 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhcqs\" (UniqueName: \"kubernetes.io/projected/4a319bc8-c36c-410d-a23b-2f0aa98fd592-kube-api-access-zhcqs\") pod \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") "
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.007514 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-dns-swift-storage-0\") pod \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") "
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.007585 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-ovsdbserver-nb\") pod \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") "
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.007607 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-config\") pod \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\" (UID: \"4a319bc8-c36c-410d-a23b-2f0aa98fd592\") "
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.012330 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a319bc8-c36c-410d-a23b-2f0aa98fd592-kube-api-access-zhcqs" (OuterVolumeSpecName: "kube-api-access-zhcqs") pod "4a319bc8-c36c-410d-a23b-2f0aa98fd592" (UID: "4a319bc8-c36c-410d-a23b-2f0aa98fd592"). InnerVolumeSpecName "kube-api-access-zhcqs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.021434 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86ad-account-create-update-s4qdw"]
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.077278 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4a319bc8-c36c-410d-a23b-2f0aa98fd592" (UID: "4a319bc8-c36c-410d-a23b-2f0aa98fd592"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.100684 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-config" (OuterVolumeSpecName: "config") pod "4a319bc8-c36c-410d-a23b-2f0aa98fd592" (UID: "4a319bc8-c36c-410d-a23b-2f0aa98fd592"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.109818 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.109843 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-config\") on node \"crc\" DevicePath \"\""
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.109853 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhcqs\" (UniqueName: \"kubernetes.io/projected/4a319bc8-c36c-410d-a23b-2f0aa98fd592-kube-api-access-zhcqs\") on node \"crc\" DevicePath \"\""
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.120153 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4a319bc8-c36c-410d-a23b-2f0aa98fd592" (UID: "4a319bc8-c36c-410d-a23b-2f0aa98fd592"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.124885 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4a319bc8-c36c-410d-a23b-2f0aa98fd592" (UID: "4a319bc8-c36c-410d-a23b-2f0aa98fd592"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.154938 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4a319bc8-c36c-410d-a23b-2f0aa98fd592" (UID: "4a319bc8-c36c-410d-a23b-2f0aa98fd592"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.184647 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-txp59"]
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.214402 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.214437 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.214450 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a319bc8-c36c-410d-a23b-2f0aa98fd592-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.293303 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b2dtp"
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.293353 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b2dtp"
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.403474 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b2dtp"
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.803465 4858 generic.go:334] "Generic (PLEG): container finished" podID="158f29d9-d8c9-47ee-912c-05108d7bec02" containerID="1006f534a6c5a1f39109ab71d7511e7397d311ee5b16765bfe64f9869ee3d37b" exitCode=0
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.803855 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jmm6p" event={"ID":"158f29d9-d8c9-47ee-912c-05108d7bec02","Type":"ContainerDied","Data":"1006f534a6c5a1f39109ab71d7511e7397d311ee5b16765bfe64f9869ee3d37b"}
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.806563 4858 generic.go:334] "Generic (PLEG): container finished" podID="4eb91469-c691-4b21-a5d3-e422d2d36cc3" containerID="035639d1ccc51a5f4036284ea2cfc6ceffcb99a7152c499bf1e7844b070fcc4f" exitCode=0
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.806620 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c58f-account-create-update-qp9nt" event={"ID":"4eb91469-c691-4b21-a5d3-e422d2d36cc3","Type":"ContainerDied","Data":"035639d1ccc51a5f4036284ea2cfc6ceffcb99a7152c499bf1e7844b070fcc4f"}
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.806647 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c58f-account-create-update-qp9nt" event={"ID":"4eb91469-c691-4b21-a5d3-e422d2d36cc3","Type":"ContainerStarted","Data":"59aac32c167aa357428938831cadeee8edbdcb8e432e7aae4a3845f9927a9819"}
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.808277 4858 generic.go:334] "Generic (PLEG): container finished" podID="1d9a40d6-2f6c-4c93-8191-d5dde87c136b" containerID="b0ffbaff1e4b83a1016db892a70e29fd8a0f137a85dd400d47e8da89eb3e7abb" exitCode=0
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.808395 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-txp59" event={"ID":"1d9a40d6-2f6c-4c93-8191-d5dde87c136b","Type":"ContainerDied","Data":"b0ffbaff1e4b83a1016db892a70e29fd8a0f137a85dd400d47e8da89eb3e7abb"}
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.808443 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-txp59" event={"ID":"1d9a40d6-2f6c-4c93-8191-d5dde87c136b","Type":"ContainerStarted","Data":"b936bfa9cd3ff3ae60a033d8e9f16337eae9bba04c2f32528a7a4cdb41b0cf90"}
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.812115 4858 generic.go:334] "Generic (PLEG): container finished" podID="a4449167-7d55-4675-9dd6-20094b472bd0" containerID="8a6f57c243bb6007ecee76ea14af508d63960756875d320370d8c3d35bebcae4" exitCode=0
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.812200 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ps6sb" event={"ID":"a4449167-7d55-4675-9dd6-20094b472bd0","Type":"ContainerDied","Data":"8a6f57c243bb6007ecee76ea14af508d63960756875d320370d8c3d35bebcae4"}
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.812231 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ps6sb" event={"ID":"a4449167-7d55-4675-9dd6-20094b472bd0","Type":"ContainerStarted","Data":"7e0b046630711b08cb1d65d9062409ffda7325e96a7bd62b60421b4d28ca0f9e"}
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.816865 4858 generic.go:334] "Generic (PLEG): container finished" podID="509b5c9b-875d-410b-b427-d0ba51cf798c" containerID="4ecb52312985551098962a856e67798ee0de396267174ead4eb05e996dd98102" exitCode=0
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.816962 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kljgl" event={"ID":"509b5c9b-875d-410b-b427-d0ba51cf798c","Type":"ContainerDied","Data":"4ecb52312985551098962a856e67798ee0de396267174ead4eb05e996dd98102"}
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.817011 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kljgl" event={"ID":"509b5c9b-875d-410b-b427-d0ba51cf798c","Type":"ContainerStarted","Data":"9084bf52aa0e043c2db4557528adcc6135222aada1eb1586177a8ce04d581cb5"}
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.819527 4858 generic.go:334] "Generic (PLEG): container finished" podID="8a82f861-3468-4057-851d-05836166f30b" containerID="034d05c68459e8bff73de37a62c72187f51a0b4a69827fe49454afefa42f13f2" exitCode=0
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.819689 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cc1b-account-create-update-tbdcd" event={"ID":"8a82f861-3468-4057-851d-05836166f30b","Type":"ContainerDied","Data":"034d05c68459e8bff73de37a62c72187f51a0b4a69827fe49454afefa42f13f2"}
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.819886 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cc1b-account-create-update-tbdcd" event={"ID":"8a82f861-3468-4057-851d-05836166f30b","Type":"ContainerStarted","Data":"07b0bd0ad757bfb4f50647e8a6719760d694bf81b3aa72490eb78b8bf5eccc00"}
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.824654 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-n6lz8" event={"ID":"668a330f-e46e-4be4-9d42-2a547988e82b","Type":"ContainerStarted","Data":"6cf35149a85dfa0ad915c745505900f8f857c6be1e3c4a5157fa651c10046252"}
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.858332 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86ad-account-create-update-s4qdw" event={"ID":"907ec6b3-b751-400a-95ea-e69381ac7785","Type":"ContainerStarted","Data":"546d7bdbe1486b73dce958069c911d4d02dcc15467347a6cfa28b921fc78a38a"}
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.858406 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86ad-account-create-update-s4qdw" event={"ID":"907ec6b3-b751-400a-95ea-e69381ac7785","Type":"ContainerStarted","Data":"4753f8b41866e47dff3916bdb86977a59ca6b08107b1029c4bccd0c67297b920"}
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.868456 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-xvphk" event={"ID":"4a319bc8-c36c-410d-a23b-2f0aa98fd592","Type":"ContainerDied","Data":"1735c071fdabdafbf22daae1451ef29a80a7753fc0096ed33ea4e51070dd8bf3"}
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.868525 4858 scope.go:117] "RemoveContainer" containerID="9081a790c5311972928eb597bc3bca2103103936f13d30384e3b64ad892791bc"
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.868807 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-xvphk"
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.970051 4858 scope.go:117] "RemoveContainer" containerID="40d48e69ebca237d91616f5280185e5a3e05e83532e9bdd5b47de07eb5ce5fc4"
Feb 02 17:30:59 crc kubenswrapper[4858]: I0202 17:30:59.996500 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-xvphk"]
Feb 02 17:31:00 crc kubenswrapper[4858]: I0202 17:31:00.002642 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-xvphk"]
Feb 02 17:31:00 crc kubenswrapper[4858]: I0202 17:31:00.413966 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a319bc8-c36c-410d-a23b-2f0aa98fd592" path="/var/lib/kubelet/pods/4a319bc8-c36c-410d-a23b-2f0aa98fd592/volumes"
Feb 02 17:31:00 crc kubenswrapper[4858]: I0202 17:31:00.884137 4858 generic.go:334] "Generic (PLEG): container finished" podID="907ec6b3-b751-400a-95ea-e69381ac7785" containerID="546d7bdbe1486b73dce958069c911d4d02dcc15467347a6cfa28b921fc78a38a" exitCode=0
Feb 02 17:31:00 crc kubenswrapper[4858]: I0202 17:31:00.884210 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86ad-account-create-update-s4qdw" event={"ID":"907ec6b3-b751-400a-95ea-e69381ac7785","Type":"ContainerDied","Data":"546d7bdbe1486b73dce958069c911d4d02dcc15467347a6cfa28b921fc78a38a"}
Feb 02 17:31:00 crc kubenswrapper[4858]: I0202 17:31:00.887864 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-txp59" event={"ID":"1d9a40d6-2f6c-4c93-8191-d5dde87c136b","Type":"ContainerStarted","Data":"1f18566c3943ba67d0f75f6297177d245130658c254368a67ee5bd06140ee0ae"}
Feb 02 17:31:00 crc kubenswrapper[4858]: I0202 17:31:00.918407 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-txp59" podStartSLOduration=2.918391678 podStartE2EDuration="2.918391678s" podCreationTimestamp="2026-02-02 17:30:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:31:00.911848693 +0000 UTC m=+962.064263958" watchObservedRunningTime="2026-02-02 17:31:00.918391678 +0000 UTC m=+962.070806943"
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.124142 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z9n2d"
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.124190 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z9n2d"
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.187889 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z9n2d"
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.334013 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ps6sb"
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.453321 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzrjw\" (UniqueName: \"kubernetes.io/projected/a4449167-7d55-4675-9dd6-20094b472bd0-kube-api-access-fzrjw\") pod \"a4449167-7d55-4675-9dd6-20094b472bd0\" (UID: \"a4449167-7d55-4675-9dd6-20094b472bd0\") "
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.453469 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4449167-7d55-4675-9dd6-20094b472bd0-operator-scripts\") pod \"a4449167-7d55-4675-9dd6-20094b472bd0\" (UID: \"a4449167-7d55-4675-9dd6-20094b472bd0\") "
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.454843 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4449167-7d55-4675-9dd6-20094b472bd0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a4449167-7d55-4675-9dd6-20094b472bd0" (UID: "a4449167-7d55-4675-9dd6-20094b472bd0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.462762 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4449167-7d55-4675-9dd6-20094b472bd0-kube-api-access-fzrjw" (OuterVolumeSpecName: "kube-api-access-fzrjw") pod "a4449167-7d55-4675-9dd6-20094b472bd0" (UID: "a4449167-7d55-4675-9dd6-20094b472bd0"). InnerVolumeSpecName "kube-api-access-fzrjw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.556743 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzrjw\" (UniqueName: \"kubernetes.io/projected/a4449167-7d55-4675-9dd6-20094b472bd0-kube-api-access-fzrjw\") on node \"crc\" DevicePath \"\""
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.556838 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4449167-7d55-4675-9dd6-20094b472bd0-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.574137 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-kljgl"
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.579211 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cc1b-account-create-update-tbdcd"
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.589421 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86ad-account-create-update-s4qdw"
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.604460 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-c58f-account-create-update-qp9nt"
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.611878 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jmm6p"
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.660914 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a82f861-3468-4057-851d-05836166f30b-operator-scripts\") pod \"8a82f861-3468-4057-851d-05836166f30b\" (UID: \"8a82f861-3468-4057-851d-05836166f30b\") "
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.660963 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2h5n\" (UniqueName: \"kubernetes.io/projected/907ec6b3-b751-400a-95ea-e69381ac7785-kube-api-access-f2h5n\") pod \"907ec6b3-b751-400a-95ea-e69381ac7785\" (UID: \"907ec6b3-b751-400a-95ea-e69381ac7785\") "
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.661039 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/509b5c9b-875d-410b-b427-d0ba51cf798c-operator-scripts\") pod \"509b5c9b-875d-410b-b427-d0ba51cf798c\" (UID: \"509b5c9b-875d-410b-b427-d0ba51cf798c\") "
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.661103 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mhnq\" (UniqueName: \"kubernetes.io/projected/8a82f861-3468-4057-851d-05836166f30b-kube-api-access-2mhnq\") pod \"8a82f861-3468-4057-851d-05836166f30b\" (UID: \"8a82f861-3468-4057-851d-05836166f30b\") "
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.661139 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907ec6b3-b751-400a-95ea-e69381ac7785-operator-scripts\") pod \"907ec6b3-b751-400a-95ea-e69381ac7785\" (UID: \"907ec6b3-b751-400a-95ea-e69381ac7785\") "
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.661161 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w59mw\" (UniqueName: \"kubernetes.io/projected/509b5c9b-875d-410b-b427-d0ba51cf798c-kube-api-access-w59mw\") pod \"509b5c9b-875d-410b-b427-d0ba51cf798c\" (UID: \"509b5c9b-875d-410b-b427-d0ba51cf798c\") "
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.662283 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/509b5c9b-875d-410b-b427-d0ba51cf798c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "509b5c9b-875d-410b-b427-d0ba51cf798c" (UID: "509b5c9b-875d-410b-b427-d0ba51cf798c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.662557 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a82f861-3468-4057-851d-05836166f30b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a82f861-3468-4057-851d-05836166f30b" (UID: "8a82f861-3468-4057-851d-05836166f30b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.662822 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/907ec6b3-b751-400a-95ea-e69381ac7785-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "907ec6b3-b751-400a-95ea-e69381ac7785" (UID: "907ec6b3-b751-400a-95ea-e69381ac7785"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.672291 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a82f861-3468-4057-851d-05836166f30b-kube-api-access-2mhnq" (OuterVolumeSpecName: "kube-api-access-2mhnq") pod "8a82f861-3468-4057-851d-05836166f30b" (UID: "8a82f861-3468-4057-851d-05836166f30b"). InnerVolumeSpecName "kube-api-access-2mhnq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.672743 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/907ec6b3-b751-400a-95ea-e69381ac7785-kube-api-access-f2h5n" (OuterVolumeSpecName: "kube-api-access-f2h5n") pod "907ec6b3-b751-400a-95ea-e69381ac7785" (UID: "907ec6b3-b751-400a-95ea-e69381ac7785"). InnerVolumeSpecName "kube-api-access-f2h5n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.680368 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/509b5c9b-875d-410b-b427-d0ba51cf798c-kube-api-access-w59mw" (OuterVolumeSpecName: "kube-api-access-w59mw") pod "509b5c9b-875d-410b-b427-d0ba51cf798c" (UID: "509b5c9b-875d-410b-b427-d0ba51cf798c"). InnerVolumeSpecName "kube-api-access-w59mw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.764699 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4eb91469-c691-4b21-a5d3-e422d2d36cc3-operator-scripts\") pod \"4eb91469-c691-4b21-a5d3-e422d2d36cc3\" (UID: \"4eb91469-c691-4b21-a5d3-e422d2d36cc3\") "
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.764830 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbhsc\" (UniqueName: \"kubernetes.io/projected/4eb91469-c691-4b21-a5d3-e422d2d36cc3-kube-api-access-wbhsc\") pod \"4eb91469-c691-4b21-a5d3-e422d2d36cc3\" (UID: \"4eb91469-c691-4b21-a5d3-e422d2d36cc3\") "
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.764923 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6fwp\" (UniqueName: \"kubernetes.io/projected/158f29d9-d8c9-47ee-912c-05108d7bec02-kube-api-access-x6fwp\") pod \"158f29d9-d8c9-47ee-912c-05108d7bec02\" (UID: \"158f29d9-d8c9-47ee-912c-05108d7bec02\") "
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.764967 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/158f29d9-d8c9-47ee-912c-05108d7bec02-operator-scripts\") pod \"158f29d9-d8c9-47ee-912c-05108d7bec02\" (UID: \"158f29d9-d8c9-47ee-912c-05108d7bec02\") "
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.765341 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a82f861-3468-4057-851d-05836166f30b-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.765358 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2h5n\" (UniqueName: \"kubernetes.io/projected/907ec6b3-b751-400a-95ea-e69381ac7785-kube-api-access-f2h5n\") on node \"crc\" DevicePath \"\""
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.765372 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/509b5c9b-875d-410b-b427-d0ba51cf798c-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.765385 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mhnq\" (UniqueName: \"kubernetes.io/projected/8a82f861-3468-4057-851d-05836166f30b-kube-api-access-2mhnq\") on node \"crc\" DevicePath \"\""
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.765397 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/907ec6b3-b751-400a-95ea-e69381ac7785-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.765407 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w59mw\" (UniqueName: \"kubernetes.io/projected/509b5c9b-875d-410b-b427-d0ba51cf798c-kube-api-access-w59mw\") on node \"crc\" DevicePath \"\""
Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.766041 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/158f29d9-d8c9-47ee-912c-05108d7bec02-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "158f29d9-d8c9-47ee-912c-05108d7bec02" (UID: "158f29d9-d8c9-47ee-912c-05108d7bec02").
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.767218 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb91469-c691-4b21-a5d3-e422d2d36cc3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4eb91469-c691-4b21-a5d3-e422d2d36cc3" (UID: "4eb91469-c691-4b21-a5d3-e422d2d36cc3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.768565 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eb91469-c691-4b21-a5d3-e422d2d36cc3-kube-api-access-wbhsc" (OuterVolumeSpecName: "kube-api-access-wbhsc") pod "4eb91469-c691-4b21-a5d3-e422d2d36cc3" (UID: "4eb91469-c691-4b21-a5d3-e422d2d36cc3"). InnerVolumeSpecName "kube-api-access-wbhsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.769958 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/158f29d9-d8c9-47ee-912c-05108d7bec02-kube-api-access-x6fwp" (OuterVolumeSpecName: "kube-api-access-x6fwp") pod "158f29d9-d8c9-47ee-912c-05108d7bec02" (UID: "158f29d9-d8c9-47ee-912c-05108d7bec02"). InnerVolumeSpecName "kube-api-access-x6fwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.867162 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4eb91469-c691-4b21-a5d3-e422d2d36cc3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.867233 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbhsc\" (UniqueName: \"kubernetes.io/projected/4eb91469-c691-4b21-a5d3-e422d2d36cc3-kube-api-access-wbhsc\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.867247 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6fwp\" (UniqueName: \"kubernetes.io/projected/158f29d9-d8c9-47ee-912c-05108d7bec02-kube-api-access-x6fwp\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.867257 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/158f29d9-d8c9-47ee-912c-05108d7bec02-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.896016 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-c58f-account-create-update-qp9nt" event={"ID":"4eb91469-c691-4b21-a5d3-e422d2d36cc3","Type":"ContainerDied","Data":"59aac32c167aa357428938831cadeee8edbdcb8e432e7aae4a3845f9927a9819"} Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.896062 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59aac32c167aa357428938831cadeee8edbdcb8e432e7aae4a3845f9927a9819" Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.896137 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-c58f-account-create-update-qp9nt" Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.910323 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kljgl" event={"ID":"509b5c9b-875d-410b-b427-d0ba51cf798c","Type":"ContainerDied","Data":"9084bf52aa0e043c2db4557528adcc6135222aada1eb1586177a8ce04d581cb5"} Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.910384 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9084bf52aa0e043c2db4557528adcc6135222aada1eb1586177a8ce04d581cb5" Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.910382 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-kljgl" Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.911724 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jmm6p" event={"ID":"158f29d9-d8c9-47ee-912c-05108d7bec02","Type":"ContainerDied","Data":"33d051cf52235fc42d7e2c2e826ec5571239b6f51510a39b6b3c5adf593ea352"} Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.911763 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33d051cf52235fc42d7e2c2e826ec5571239b6f51510a39b6b3c5adf593ea352" Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.911835 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jmm6p" Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.918479 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86ad-account-create-update-s4qdw" event={"ID":"907ec6b3-b751-400a-95ea-e69381ac7785","Type":"ContainerDied","Data":"4753f8b41866e47dff3916bdb86977a59ca6b08107b1029c4bccd0c67297b920"} Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.918559 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4753f8b41866e47dff3916bdb86977a59ca6b08107b1029c4bccd0c67297b920" Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.919193 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86ad-account-create-update-s4qdw" Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.923411 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-ps6sb" Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.923411 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ps6sb" event={"ID":"a4449167-7d55-4675-9dd6-20094b472bd0","Type":"ContainerDied","Data":"7e0b046630711b08cb1d65d9062409ffda7325e96a7bd62b60421b4d28ca0f9e"} Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.923489 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e0b046630711b08cb1d65d9062409ffda7325e96a7bd62b60421b4d28ca0f9e" Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.928409 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cc1b-account-create-update-tbdcd" event={"ID":"8a82f861-3468-4057-851d-05836166f30b","Type":"ContainerDied","Data":"07b0bd0ad757bfb4f50647e8a6719760d694bf81b3aa72490eb78b8bf5eccc00"} Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.928468 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07b0bd0ad757bfb4f50647e8a6719760d694bf81b3aa72490eb78b8bf5eccc00" Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.928583 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cc1b-account-create-update-tbdcd" Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.929711 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-txp59" Feb 02 17:31:01 crc kubenswrapper[4858]: I0202 17:31:01.987348 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z9n2d" Feb 02 17:31:03 crc kubenswrapper[4858]: I0202 17:31:03.242663 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z9n2d"] Feb 02 17:31:04 crc kubenswrapper[4858]: I0202 17:31:04.959235 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-n6lz8" event={"ID":"668a330f-e46e-4be4-9d42-2a547988e82b","Type":"ContainerStarted","Data":"70750dcfaf02200e8f69ffbc0988d57dd4837336eb2815c2b955cc860a72d598"} Feb 02 17:31:04 crc kubenswrapper[4858]: I0202 17:31:04.959432 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z9n2d" podUID="193d2b9e-dc31-4d42-971b-ff706ff40bb1" containerName="registry-server" containerID="cri-o://d96931cae6beb83b24dd3a0e4cff3c5db42e87d46301a56018944b5e9a41f732" gracePeriod=2 Feb 02 17:31:04 crc kubenswrapper[4858]: I0202 17:31:04.991523 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-n6lz8" podStartSLOduration=2.523402639 podStartE2EDuration="7.99150484s" podCreationTimestamp="2026-02-02 17:30:57 +0000 UTC" firstStartedPulling="2026-02-02 17:30:58.865912903 +0000 UTC m=+960.018328168" lastFinishedPulling="2026-02-02 17:31:04.334015104 +0000 UTC m=+965.486430369" observedRunningTime="2026-02-02 17:31:04.987383584 +0000 UTC m=+966.139798859" watchObservedRunningTime="2026-02-02 17:31:04.99150484 +0000 UTC m=+966.143920105" Feb 02 17:31:05 crc kubenswrapper[4858]: I0202 17:31:05.416238 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z9n2d" Feb 02 17:31:05 crc kubenswrapper[4858]: I0202 17:31:05.550046 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/193d2b9e-dc31-4d42-971b-ff706ff40bb1-utilities\") pod \"193d2b9e-dc31-4d42-971b-ff706ff40bb1\" (UID: \"193d2b9e-dc31-4d42-971b-ff706ff40bb1\") " Feb 02 17:31:05 crc kubenswrapper[4858]: I0202 17:31:05.550121 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/193d2b9e-dc31-4d42-971b-ff706ff40bb1-catalog-content\") pod \"193d2b9e-dc31-4d42-971b-ff706ff40bb1\" (UID: \"193d2b9e-dc31-4d42-971b-ff706ff40bb1\") " Feb 02 17:31:05 crc kubenswrapper[4858]: I0202 17:31:05.550158 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48h2d\" (UniqueName: \"kubernetes.io/projected/193d2b9e-dc31-4d42-971b-ff706ff40bb1-kube-api-access-48h2d\") pod \"193d2b9e-dc31-4d42-971b-ff706ff40bb1\" (UID: \"193d2b9e-dc31-4d42-971b-ff706ff40bb1\") " Feb 02 17:31:05 crc kubenswrapper[4858]: I0202 17:31:05.551279 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/193d2b9e-dc31-4d42-971b-ff706ff40bb1-utilities" (OuterVolumeSpecName: "utilities") pod "193d2b9e-dc31-4d42-971b-ff706ff40bb1" (UID: "193d2b9e-dc31-4d42-971b-ff706ff40bb1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:31:05 crc kubenswrapper[4858]: I0202 17:31:05.555446 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/193d2b9e-dc31-4d42-971b-ff706ff40bb1-kube-api-access-48h2d" (OuterVolumeSpecName: "kube-api-access-48h2d") pod "193d2b9e-dc31-4d42-971b-ff706ff40bb1" (UID: "193d2b9e-dc31-4d42-971b-ff706ff40bb1"). InnerVolumeSpecName "kube-api-access-48h2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:31:05 crc kubenswrapper[4858]: I0202 17:31:05.626346 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/193d2b9e-dc31-4d42-971b-ff706ff40bb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "193d2b9e-dc31-4d42-971b-ff706ff40bb1" (UID: "193d2b9e-dc31-4d42-971b-ff706ff40bb1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:31:05 crc kubenswrapper[4858]: I0202 17:31:05.651821 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/193d2b9e-dc31-4d42-971b-ff706ff40bb1-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:05 crc kubenswrapper[4858]: I0202 17:31:05.651851 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/193d2b9e-dc31-4d42-971b-ff706ff40bb1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:05 crc kubenswrapper[4858]: I0202 17:31:05.651865 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48h2d\" (UniqueName: \"kubernetes.io/projected/193d2b9e-dc31-4d42-971b-ff706ff40bb1-kube-api-access-48h2d\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:05 crc kubenswrapper[4858]: I0202 17:31:05.969510 4858 generic.go:334] "Generic (PLEG): container finished" podID="193d2b9e-dc31-4d42-971b-ff706ff40bb1" containerID="d96931cae6beb83b24dd3a0e4cff3c5db42e87d46301a56018944b5e9a41f732" exitCode=0 Feb 02 17:31:05 crc kubenswrapper[4858]: I0202 17:31:05.969558 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9n2d" event={"ID":"193d2b9e-dc31-4d42-971b-ff706ff40bb1","Type":"ContainerDied","Data":"d96931cae6beb83b24dd3a0e4cff3c5db42e87d46301a56018944b5e9a41f732"} Feb 02 17:31:05 crc kubenswrapper[4858]: I0202 17:31:05.970802 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z9n2d" event={"ID":"193d2b9e-dc31-4d42-971b-ff706ff40bb1","Type":"ContainerDied","Data":"c7eef3ee3c84ff95e80920239514951f8549aae18ffa5ffd1131bd346e91457a"} Feb 02 17:31:05 crc kubenswrapper[4858]: I0202 17:31:05.969682 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z9n2d" Feb 02 17:31:05 crc kubenswrapper[4858]: I0202 17:31:05.970855 4858 scope.go:117] "RemoveContainer" containerID="d96931cae6beb83b24dd3a0e4cff3c5db42e87d46301a56018944b5e9a41f732" Feb 02 17:31:05 crc kubenswrapper[4858]: I0202 17:31:05.999832 4858 scope.go:117] "RemoveContainer" containerID="2b15d1c09c0ca2431258629272947923336475ad51087a5ddcfeac415e6c87ce" Feb 02 17:31:06 crc kubenswrapper[4858]: I0202 17:31:06.031804 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z9n2d"] Feb 02 17:31:06 crc kubenswrapper[4858]: I0202 17:31:06.039064 4858 scope.go:117] "RemoveContainer" containerID="44146ec06a4a4f70430533c7b89e8e31584909318a823aaac79939e70ab0357f" Feb 02 17:31:06 crc kubenswrapper[4858]: I0202 17:31:06.039957 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z9n2d"] Feb 02 17:31:06 crc kubenswrapper[4858]: I0202 17:31:06.064211 4858 scope.go:117] "RemoveContainer" containerID="d96931cae6beb83b24dd3a0e4cff3c5db42e87d46301a56018944b5e9a41f732" Feb 02 17:31:06 crc kubenswrapper[4858]: E0202 17:31:06.065107 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d96931cae6beb83b24dd3a0e4cff3c5db42e87d46301a56018944b5e9a41f732\": container with ID starting with d96931cae6beb83b24dd3a0e4cff3c5db42e87d46301a56018944b5e9a41f732 not found: ID does not exist" containerID="d96931cae6beb83b24dd3a0e4cff3c5db42e87d46301a56018944b5e9a41f732" Feb 02 17:31:06 crc kubenswrapper[4858]: I0202 17:31:06.065212 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d96931cae6beb83b24dd3a0e4cff3c5db42e87d46301a56018944b5e9a41f732"} err="failed to get container status \"d96931cae6beb83b24dd3a0e4cff3c5db42e87d46301a56018944b5e9a41f732\": rpc error: code = NotFound desc = could not find container \"d96931cae6beb83b24dd3a0e4cff3c5db42e87d46301a56018944b5e9a41f732\": container with ID starting with d96931cae6beb83b24dd3a0e4cff3c5db42e87d46301a56018944b5e9a41f732 not found: ID does not exist" Feb 02 17:31:06 crc kubenswrapper[4858]: I0202 17:31:06.065299 4858 scope.go:117] "RemoveContainer" containerID="2b15d1c09c0ca2431258629272947923336475ad51087a5ddcfeac415e6c87ce" Feb 02 17:31:06 crc kubenswrapper[4858]: E0202 17:31:06.065677 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b15d1c09c0ca2431258629272947923336475ad51087a5ddcfeac415e6c87ce\": container with ID starting with 2b15d1c09c0ca2431258629272947923336475ad51087a5ddcfeac415e6c87ce not found: ID does not exist" containerID="2b15d1c09c0ca2431258629272947923336475ad51087a5ddcfeac415e6c87ce" Feb 02 17:31:06 crc kubenswrapper[4858]: I0202 17:31:06.065712 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b15d1c09c0ca2431258629272947923336475ad51087a5ddcfeac415e6c87ce"} err="failed to get container status \"2b15d1c09c0ca2431258629272947923336475ad51087a5ddcfeac415e6c87ce\": rpc error: code = NotFound desc = could not find container \"2b15d1c09c0ca2431258629272947923336475ad51087a5ddcfeac415e6c87ce\": container with ID starting with 2b15d1c09c0ca2431258629272947923336475ad51087a5ddcfeac415e6c87ce not found: ID does not exist" Feb 02 17:31:06 crc kubenswrapper[4858]: I0202 17:31:06.065728 4858 scope.go:117] "RemoveContainer" 
containerID="44146ec06a4a4f70430533c7b89e8e31584909318a823aaac79939e70ab0357f" Feb 02 17:31:06 crc kubenswrapper[4858]: E0202 17:31:06.066000 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44146ec06a4a4f70430533c7b89e8e31584909318a823aaac79939e70ab0357f\": container with ID starting with 44146ec06a4a4f70430533c7b89e8e31584909318a823aaac79939e70ab0357f not found: ID does not exist" containerID="44146ec06a4a4f70430533c7b89e8e31584909318a823aaac79939e70ab0357f" Feb 02 17:31:06 crc kubenswrapper[4858]: I0202 17:31:06.066043 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44146ec06a4a4f70430533c7b89e8e31584909318a823aaac79939e70ab0357f"} err="failed to get container status \"44146ec06a4a4f70430533c7b89e8e31584909318a823aaac79939e70ab0357f\": rpc error: code = NotFound desc = could not find container \"44146ec06a4a4f70430533c7b89e8e31584909318a823aaac79939e70ab0357f\": container with ID starting with 44146ec06a4a4f70430533c7b89e8e31584909318a823aaac79939e70ab0357f not found: ID does not exist" Feb 02 17:31:06 crc kubenswrapper[4858]: I0202 17:31:06.413998 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="193d2b9e-dc31-4d42-971b-ff706ff40bb1" path="/var/lib/kubelet/pods/193d2b9e-dc31-4d42-971b-ff706ff40bb1/volumes" Feb 02 17:31:06 crc kubenswrapper[4858]: I0202 17:31:06.674831 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pmntb" Feb 02 17:31:08 crc kubenswrapper[4858]: I0202 17:31:08.001769 4858 generic.go:334] "Generic (PLEG): container finished" podID="668a330f-e46e-4be4-9d42-2a547988e82b" containerID="70750dcfaf02200e8f69ffbc0988d57dd4837336eb2815c2b955cc860a72d598" exitCode=0 Feb 02 17:31:08 crc kubenswrapper[4858]: I0202 17:31:08.001846 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-n6lz8" event={"ID":"668a330f-e46e-4be4-9d42-2a547988e82b","Type":"ContainerDied","Data":"70750dcfaf02200e8f69ffbc0988d57dd4837336eb2815c2b955cc860a72d598"} Feb 02 17:31:08 crc kubenswrapper[4858]: I0202 17:31:08.507172 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-txp59" Feb 02 17:31:08 crc kubenswrapper[4858]: I0202 17:31:08.569554 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-6qdtt"] Feb 02 17:31:08 crc kubenswrapper[4858]: I0202 17:31:08.569850 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-6qdtt" podUID="5350e65d-0f27-4ac0-9251-00ce22348491" containerName="dnsmasq-dns" containerID="cri-o://e0d357ff68dfc0c048c53fbd7b229d228bcd0d28988ef48b8396526b0fe205bc" gracePeriod=10 Feb 02 17:31:08 crc kubenswrapper[4858]: I0202 17:31:08.840705 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmntb"] Feb 02 17:31:08 crc kubenswrapper[4858]: I0202 17:31:08.840964 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pmntb" podUID="235550bc-ede9-4da9-a2d0-08253c0d3a29" containerName="registry-server" containerID="cri-o://94e0f61c9aa16fc8d8c0ece1da7390f3dec6b624a5a379d30b058db4ef3b516e" gracePeriod=2 Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.021212 4858 generic.go:334] "Generic (PLEG): container finished" podID="5350e65d-0f27-4ac0-9251-00ce22348491" 
containerID="e0d357ff68dfc0c048c53fbd7b229d228bcd0d28988ef48b8396526b0fe205bc" exitCode=0 Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.021305 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-6qdtt" event={"ID":"5350e65d-0f27-4ac0-9251-00ce22348491","Type":"ContainerDied","Data":"e0d357ff68dfc0c048c53fbd7b229d228bcd0d28988ef48b8396526b0fe205bc"} Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.021356 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-6qdtt" event={"ID":"5350e65d-0f27-4ac0-9251-00ce22348491","Type":"ContainerDied","Data":"fa78dd79bc7bb7c7948ab86ea162866a718f4f05cd03d0ceeaac8a678685ca04"} Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.021370 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa78dd79bc7bb7c7948ab86ea162866a718f4f05cd03d0ceeaac8a678685ca04" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.023809 4858 generic.go:334] "Generic (PLEG): container finished" podID="235550bc-ede9-4da9-a2d0-08253c0d3a29" containerID="94e0f61c9aa16fc8d8c0ece1da7390f3dec6b624a5a379d30b058db4ef3b516e" exitCode=0 Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.023860 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmntb" event={"ID":"235550bc-ede9-4da9-a2d0-08253c0d3a29","Type":"ContainerDied","Data":"94e0f61c9aa16fc8d8c0ece1da7390f3dec6b624a5a379d30b058db4ef3b516e"} Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.043845 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-6qdtt" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.231636 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-dns-svc\") pod \"5350e65d-0f27-4ac0-9251-00ce22348491\" (UID: \"5350e65d-0f27-4ac0-9251-00ce22348491\") " Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.231722 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-ovsdbserver-sb\") pod \"5350e65d-0f27-4ac0-9251-00ce22348491\" (UID: \"5350e65d-0f27-4ac0-9251-00ce22348491\") " Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.231844 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-config\") pod \"5350e65d-0f27-4ac0-9251-00ce22348491\" (UID: \"5350e65d-0f27-4ac0-9251-00ce22348491\") " Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.231869 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wprjb\" (UniqueName: \"kubernetes.io/projected/5350e65d-0f27-4ac0-9251-00ce22348491-kube-api-access-wprjb\") pod \"5350e65d-0f27-4ac0-9251-00ce22348491\" (UID: \"5350e65d-0f27-4ac0-9251-00ce22348491\") " Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.231991 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-ovsdbserver-nb\") pod \"5350e65d-0f27-4ac0-9251-00ce22348491\" (UID: \"5350e65d-0f27-4ac0-9251-00ce22348491\") " Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.258424 4858 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5350e65d-0f27-4ac0-9251-00ce22348491-kube-api-access-wprjb" (OuterVolumeSpecName: "kube-api-access-wprjb") pod "5350e65d-0f27-4ac0-9251-00ce22348491" (UID: "5350e65d-0f27-4ac0-9251-00ce22348491"). InnerVolumeSpecName "kube-api-access-wprjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.287631 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-config" (OuterVolumeSpecName: "config") pod "5350e65d-0f27-4ac0-9251-00ce22348491" (UID: "5350e65d-0f27-4ac0-9251-00ce22348491"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.304636 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5350e65d-0f27-4ac0-9251-00ce22348491" (UID: "5350e65d-0f27-4ac0-9251-00ce22348491"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.330391 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5350e65d-0f27-4ac0-9251-00ce22348491" (UID: "5350e65d-0f27-4ac0-9251-00ce22348491"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.334687 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5350e65d-0f27-4ac0-9251-00ce22348491" (UID: "5350e65d-0f27-4ac0-9251-00ce22348491"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.338834 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b2dtp" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.345572 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.345610 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.345657 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.345967 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wprjb\" (UniqueName: \"kubernetes.io/projected/5350e65d-0f27-4ac0-9251-00ce22348491-kube-api-access-wprjb\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.346009 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5350e65d-0f27-4ac0-9251-00ce22348491-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.358628 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmntb" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.371641 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-n6lz8" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.550366 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/235550bc-ede9-4da9-a2d0-08253c0d3a29-utilities\") pod \"235550bc-ede9-4da9-a2d0-08253c0d3a29\" (UID: \"235550bc-ede9-4da9-a2d0-08253c0d3a29\") " Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.550430 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/668a330f-e46e-4be4-9d42-2a547988e82b-config-data\") pod \"668a330f-e46e-4be4-9d42-2a547988e82b\" (UID: \"668a330f-e46e-4be4-9d42-2a547988e82b\") " Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.551257 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx4qr\" (UniqueName: \"kubernetes.io/projected/668a330f-e46e-4be4-9d42-2a547988e82b-kube-api-access-lx4qr\") pod \"668a330f-e46e-4be4-9d42-2a547988e82b\" (UID: \"668a330f-e46e-4be4-9d42-2a547988e82b\") " Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.551314 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr2sw\" (UniqueName: \"kubernetes.io/projected/235550bc-ede9-4da9-a2d0-08253c0d3a29-kube-api-access-cr2sw\") pod \"235550bc-ede9-4da9-a2d0-08253c0d3a29\" (UID: \"235550bc-ede9-4da9-a2d0-08253c0d3a29\") " Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.551412 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/668a330f-e46e-4be4-9d42-2a547988e82b-combined-ca-bundle\") pod \"668a330f-e46e-4be4-9d42-2a547988e82b\" (UID: \"668a330f-e46e-4be4-9d42-2a547988e82b\") " Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.551445 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/235550bc-ede9-4da9-a2d0-08253c0d3a29-catalog-content\") pod \"235550bc-ede9-4da9-a2d0-08253c0d3a29\" (UID: \"235550bc-ede9-4da9-a2d0-08253c0d3a29\") " Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.554741 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/235550bc-ede9-4da9-a2d0-08253c0d3a29-utilities" (OuterVolumeSpecName: "utilities") pod "235550bc-ede9-4da9-a2d0-08253c0d3a29" (UID: "235550bc-ede9-4da9-a2d0-08253c0d3a29"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.560249 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/668a330f-e46e-4be4-9d42-2a547988e82b-kube-api-access-lx4qr" (OuterVolumeSpecName: "kube-api-access-lx4qr") pod "668a330f-e46e-4be4-9d42-2a547988e82b" (UID: "668a330f-e46e-4be4-9d42-2a547988e82b"). InnerVolumeSpecName "kube-api-access-lx4qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.563009 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/235550bc-ede9-4da9-a2d0-08253c0d3a29-kube-api-access-cr2sw" (OuterVolumeSpecName: "kube-api-access-cr2sw") pod "235550bc-ede9-4da9-a2d0-08253c0d3a29" (UID: "235550bc-ede9-4da9-a2d0-08253c0d3a29"). InnerVolumeSpecName "kube-api-access-cr2sw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.574779 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/668a330f-e46e-4be4-9d42-2a547988e82b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "668a330f-e46e-4be4-9d42-2a547988e82b" (UID: "668a330f-e46e-4be4-9d42-2a547988e82b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.578920 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/235550bc-ede9-4da9-a2d0-08253c0d3a29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "235550bc-ede9-4da9-a2d0-08253c0d3a29" (UID: "235550bc-ede9-4da9-a2d0-08253c0d3a29"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.591748 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/668a330f-e46e-4be4-9d42-2a547988e82b-config-data" (OuterVolumeSpecName: "config-data") pod "668a330f-e46e-4be4-9d42-2a547988e82b" (UID: "668a330f-e46e-4be4-9d42-2a547988e82b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.654274 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx4qr\" (UniqueName: \"kubernetes.io/projected/668a330f-e46e-4be4-9d42-2a547988e82b-kube-api-access-lx4qr\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.654316 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr2sw\" (UniqueName: \"kubernetes.io/projected/235550bc-ede9-4da9-a2d0-08253c0d3a29-kube-api-access-cr2sw\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.654332 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/668a330f-e46e-4be4-9d42-2a547988e82b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.654344 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/235550bc-ede9-4da9-a2d0-08253c0d3a29-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.654357 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/235550bc-ede9-4da9-a2d0-08253c0d3a29-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:09 crc kubenswrapper[4858]: I0202 17:31:09.654368 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/668a330f-e46e-4be4-9d42-2a547988e82b-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.034422 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmntb" event={"ID":"235550bc-ede9-4da9-a2d0-08253c0d3a29","Type":"ContainerDied","Data":"f208c4379c334de9ed3652411c2faa6a6a53d0ff48bfcfc47777fe4b76625c4c"} Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.036508 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-n6lz8" 
event={"ID":"668a330f-e46e-4be4-9d42-2a547988e82b","Type":"ContainerDied","Data":"6cf35149a85dfa0ad915c745505900f8f857c6be1e3c4a5157fa651c10046252"} Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.036532 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cf35149a85dfa0ad915c745505900f8f857c6be1e3c4a5157fa651c10046252" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.036090 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-n6lz8" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.036586 4858 scope.go:117] "RemoveContainer" containerID="94e0f61c9aa16fc8d8c0ece1da7390f3dec6b624a5a379d30b058db4ef3b516e" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.034435 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmntb" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.036047 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-6qdtt" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.074673 4858 scope.go:117] "RemoveContainer" containerID="d411d2814a090cbe1078c467c0e3f16e55dbbc238e0be064f3e407364d02f674" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.090556 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmntb"] Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.102953 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmntb"] Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.131502 4858 scope.go:117] "RemoveContainer" containerID="05387fbc30d14850792220dd91943a894c2d2439ace70d0233f2abbd76d6eef9" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.136820 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-6qdtt"] Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.145501 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-6qdtt"] Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.303669 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2cblc"] Feb 02 17:31:10 crc kubenswrapper[4858]: E0202 17:31:10.304144 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="907ec6b3-b751-400a-95ea-e69381ac7785" containerName="mariadb-account-create-update" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304166 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="907ec6b3-b751-400a-95ea-e69381ac7785" containerName="mariadb-account-create-update" Feb 02 17:31:10 crc kubenswrapper[4858]: E0202 17:31:10.304181 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5350e65d-0f27-4ac0-9251-00ce22348491" containerName="dnsmasq-dns" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304189 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5350e65d-0f27-4ac0-9251-00ce22348491" containerName="dnsmasq-dns" Feb 02 17:31:10 crc kubenswrapper[4858]: E0202 17:31:10.304199 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668a330f-e46e-4be4-9d42-2a547988e82b" containerName="keystone-db-sync" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304209 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="668a330f-e46e-4be4-9d42-2a547988e82b" containerName="keystone-db-sync" Feb 02 17:31:10 crc 
kubenswrapper[4858]: E0202 17:31:10.304217 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="193d2b9e-dc31-4d42-971b-ff706ff40bb1" containerName="extract-content" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304226 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="193d2b9e-dc31-4d42-971b-ff706ff40bb1" containerName="extract-content" Feb 02 17:31:10 crc kubenswrapper[4858]: E0202 17:31:10.304242 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="235550bc-ede9-4da9-a2d0-08253c0d3a29" containerName="registry-server" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304279 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="235550bc-ede9-4da9-a2d0-08253c0d3a29" containerName="registry-server" Feb 02 17:31:10 crc kubenswrapper[4858]: E0202 17:31:10.304296 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4449167-7d55-4675-9dd6-20094b472bd0" containerName="mariadb-database-create" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304303 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4449167-7d55-4675-9dd6-20094b472bd0" containerName="mariadb-database-create" Feb 02 17:31:10 crc kubenswrapper[4858]: E0202 17:31:10.304319 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a319bc8-c36c-410d-a23b-2f0aa98fd592" containerName="init" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304326 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a319bc8-c36c-410d-a23b-2f0aa98fd592" containerName="init" Feb 02 17:31:10 crc kubenswrapper[4858]: E0202 17:31:10.304338 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="193d2b9e-dc31-4d42-971b-ff706ff40bb1" containerName="registry-server" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304346 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="193d2b9e-dc31-4d42-971b-ff706ff40bb1" containerName="registry-server" Feb 02 17:31:10 crc kubenswrapper[4858]: E0202 17:31:10.304363 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="235550bc-ede9-4da9-a2d0-08253c0d3a29" containerName="extract-utilities" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304371 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="235550bc-ede9-4da9-a2d0-08253c0d3a29" containerName="extract-utilities" Feb 02 17:31:10 crc kubenswrapper[4858]: E0202 17:31:10.304383 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="509b5c9b-875d-410b-b427-d0ba51cf798c" containerName="mariadb-database-create" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304390 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="509b5c9b-875d-410b-b427-d0ba51cf798c" containerName="mariadb-database-create" Feb 02 17:31:10 crc kubenswrapper[4858]: E0202 17:31:10.304407 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="158f29d9-d8c9-47ee-912c-05108d7bec02" containerName="mariadb-database-create" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304414 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="158f29d9-d8c9-47ee-912c-05108d7bec02" containerName="mariadb-database-create" Feb 02 17:31:10 crc kubenswrapper[4858]: E0202 17:31:10.304426 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="235550bc-ede9-4da9-a2d0-08253c0d3a29" containerName="extract-content" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304433 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="235550bc-ede9-4da9-a2d0-08253c0d3a29" 
containerName="extract-content" Feb 02 17:31:10 crc kubenswrapper[4858]: E0202 17:31:10.304443 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a319bc8-c36c-410d-a23b-2f0aa98fd592" containerName="dnsmasq-dns" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304451 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a319bc8-c36c-410d-a23b-2f0aa98fd592" containerName="dnsmasq-dns" Feb 02 17:31:10 crc kubenswrapper[4858]: E0202 17:31:10.304464 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5350e65d-0f27-4ac0-9251-00ce22348491" containerName="init" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304471 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5350e65d-0f27-4ac0-9251-00ce22348491" containerName="init" Feb 02 17:31:10 crc kubenswrapper[4858]: E0202 17:31:10.304484 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="193d2b9e-dc31-4d42-971b-ff706ff40bb1" containerName="extract-utilities" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304491 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="193d2b9e-dc31-4d42-971b-ff706ff40bb1" containerName="extract-utilities" Feb 02 17:31:10 crc kubenswrapper[4858]: E0202 17:31:10.304503 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a82f861-3468-4057-851d-05836166f30b" containerName="mariadb-account-create-update" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304511 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a82f861-3468-4057-851d-05836166f30b" containerName="mariadb-account-create-update" Feb 02 17:31:10 crc kubenswrapper[4858]: E0202 17:31:10.304521 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eb91469-c691-4b21-a5d3-e422d2d36cc3" containerName="mariadb-account-create-update" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304530 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eb91469-c691-4b21-a5d3-e422d2d36cc3" containerName="mariadb-account-create-update" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304731 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="235550bc-ede9-4da9-a2d0-08253c0d3a29" containerName="registry-server" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304746 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4449167-7d55-4675-9dd6-20094b472bd0" containerName="mariadb-database-create" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304758 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="509b5c9b-875d-410b-b427-d0ba51cf798c" containerName="mariadb-database-create" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304774 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="668a330f-e46e-4be4-9d42-2a547988e82b" containerName="keystone-db-sync" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304783 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a82f861-3468-4057-851d-05836166f30b" containerName="mariadb-account-create-update" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304794 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eb91469-c691-4b21-a5d3-e422d2d36cc3" containerName="mariadb-account-create-update" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304801 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="158f29d9-d8c9-47ee-912c-05108d7bec02" containerName="mariadb-database-create" Feb 02 17:31:10 crc 
kubenswrapper[4858]: I0202 17:31:10.304808 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="193d2b9e-dc31-4d42-971b-ff706ff40bb1" containerName="registry-server" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304820 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a319bc8-c36c-410d-a23b-2f0aa98fd592" containerName="dnsmasq-dns" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304835 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="907ec6b3-b751-400a-95ea-e69381ac7785" containerName="mariadb-account-create-update" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.304844 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5350e65d-0f27-4ac0-9251-00ce22348491" containerName="dnsmasq-dns" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.305599 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.312490 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.312908 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.312915 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.314328 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7ztbs" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.314525 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.333817 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-r5xrf"] Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.340512 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.368411 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2cblc"] Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.372268 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-config\") pod \"dnsmasq-dns-847c4cc679-r5xrf\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") " pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.372509 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-credential-keys\") pod \"keystone-bootstrap-2cblc\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.372596 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mmx2\" (UniqueName: \"kubernetes.io/projected/74e45947-b64b-41c2-8b25-04a632777ca1-kube-api-access-9mmx2\") pod \"dnsmasq-dns-847c4cc679-r5xrf\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") " pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.372692 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-dns-svc\") pod \"dnsmasq-dns-847c4cc679-r5xrf\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") " pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.372787 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-config-data\") pod \"keystone-bootstrap-2cblc\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.382265 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-combined-ca-bundle\") pod \"keystone-bootstrap-2cblc\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.382374 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-scripts\") pod \"keystone-bootstrap-2cblc\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.392664 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-fernet-keys\") pod \"keystone-bootstrap-2cblc\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.392765 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-n24h8\" (UniqueName: \"kubernetes.io/projected/9a10b005-fba9-454e-b8a8-e7ffa96fc978-kube-api-access-n24h8\") pod \"keystone-bootstrap-2cblc\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.393057 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-r5xrf\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") " pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.393153 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-r5xrf\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") " pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.393275 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-r5xrf\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") " pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.411204 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-r5xrf"] Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.497598 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="235550bc-ede9-4da9-a2d0-08253c0d3a29" path="/var/lib/kubelet/pods/235550bc-ede9-4da9-a2d0-08253c0d3a29/volumes" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.499127 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5350e65d-0f27-4ac0-9251-00ce22348491" path="/var/lib/kubelet/pods/5350e65d-0f27-4ac0-9251-00ce22348491/volumes" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.499497 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-r5xrf\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") " pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.499568 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-config\") pod \"dnsmasq-dns-847c4cc679-r5xrf\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") " pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.499627 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-credential-keys\") pod \"keystone-bootstrap-2cblc\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.499668 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mmx2\" (UniqueName: 
\"kubernetes.io/projected/74e45947-b64b-41c2-8b25-04a632777ca1-kube-api-access-9mmx2\") pod \"dnsmasq-dns-847c4cc679-r5xrf\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") " pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.499841 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-dns-svc\") pod \"dnsmasq-dns-847c4cc679-r5xrf\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") " pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.499912 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-config-data\") pod \"keystone-bootstrap-2cblc\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.499970 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-combined-ca-bundle\") pod \"keystone-bootstrap-2cblc\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.500020 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-scripts\") pod \"keystone-bootstrap-2cblc\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.500077 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-fernet-keys\") pod \"keystone-bootstrap-2cblc\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.500106 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n24h8\" (UniqueName: \"kubernetes.io/projected/9a10b005-fba9-454e-b8a8-e7ffa96fc978-kube-api-access-n24h8\") pod \"keystone-bootstrap-2cblc\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.500233 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-r5xrf\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") " pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.500284 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-r5xrf\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") " pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.501887 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-r5xrf\" (UID: 
\"74e45947-b64b-41c2-8b25-04a632777ca1\") " pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.502565 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-dns-svc\") pod \"dnsmasq-dns-847c4cc679-r5xrf\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") " pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.503841 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-config\") pod \"dnsmasq-dns-847c4cc679-r5xrf\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") " pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.504837 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-r5xrf\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") " pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.505726 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-r5xrf\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") " pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.510183 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-config-data\") pod \"keystone-bootstrap-2cblc\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.545403 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-combined-ca-bundle\") pod \"keystone-bootstrap-2cblc\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.545748 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-scripts\") pod \"keystone-bootstrap-2cblc\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.546357 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-fernet-keys\") pod \"keystone-bootstrap-2cblc\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.558078 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-credential-keys\") pod \"keystone-bootstrap-2cblc\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.579701 4858 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-n24h8\" (UniqueName: \"kubernetes.io/projected/9a10b005-fba9-454e-b8a8-e7ffa96fc978-kube-api-access-n24h8\") pod \"keystone-bootstrap-2cblc\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.593714 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mmx2\" (UniqueName: \"kubernetes.io/projected/74e45947-b64b-41c2-8b25-04a632777ca1-kube-api-access-9mmx2\") pod \"dnsmasq-dns-847c4cc679-r5xrf\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") " pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.656464 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.667104 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-6xt8q"] Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.668163 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6xt8q" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.673614 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.678284 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.678440 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.678597 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-vxpsf" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.706619 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-etc-machine-id\") pod \"cinder-db-sync-6xt8q\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") " pod="openstack/cinder-db-sync-6xt8q" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.706717 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-combined-ca-bundle\") pod \"cinder-db-sync-6xt8q\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") " pod="openstack/cinder-db-sync-6xt8q" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.706744 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-config-data\") pod \"cinder-db-sync-6xt8q\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") " pod="openstack/cinder-db-sync-6xt8q" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.706762 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-scripts\") pod \"cinder-db-sync-6xt8q\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") " pod="openstack/cinder-db-sync-6xt8q" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.706801 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-db-sync-config-data\") pod \"cinder-db-sync-6xt8q\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") " pod="openstack/cinder-db-sync-6xt8q" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.706827 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjfgs\" (UniqueName: \"kubernetes.io/projected/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-kube-api-access-fjfgs\") pod \"cinder-db-sync-6xt8q\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") " pod="openstack/cinder-db-sync-6xt8q" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.714222 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-799987499c-xpcx4"] Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.716517 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-799987499c-xpcx4" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.728104 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.728310 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.728561 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-j6jpd" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.728712 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.730690 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6xt8q"] Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.789672 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-799987499c-xpcx4"] Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.821206 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2z72\" (UniqueName: \"kubernetes.io/projected/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-kube-api-access-k2z72\") pod \"horizon-799987499c-xpcx4\" (UID: \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\") " pod="openstack/horizon-799987499c-xpcx4" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.821263 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-combined-ca-bundle\") pod \"cinder-db-sync-6xt8q\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") " pod="openstack/cinder-db-sync-6xt8q" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.821293 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-config-data\") pod \"cinder-db-sync-6xt8q\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") " pod="openstack/cinder-db-sync-6xt8q" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.821313 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-scripts\") pod \"cinder-db-sync-6xt8q\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") " pod="openstack/cinder-db-sync-6xt8q" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.821354 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-db-sync-config-data\") pod \"cinder-db-sync-6xt8q\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") " pod="openstack/cinder-db-sync-6xt8q" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.821377 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjfgs\" (UniqueName: \"kubernetes.io/projected/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-kube-api-access-fjfgs\") pod \"cinder-db-sync-6xt8q\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") " pod="openstack/cinder-db-sync-6xt8q" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.821408 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-scripts\") pod \"horizon-799987499c-xpcx4\" (UID: \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\") " pod="openstack/horizon-799987499c-xpcx4" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.821435 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-etc-machine-id\") pod \"cinder-db-sync-6xt8q\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") " pod="openstack/cinder-db-sync-6xt8q" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.821455 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-logs\") pod \"horizon-799987499c-xpcx4\" (UID: \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\") " pod="openstack/horizon-799987499c-xpcx4" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.821470 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-config-data\") pod \"horizon-799987499c-xpcx4\" (UID: \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\") " pod="openstack/horizon-799987499c-xpcx4" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.821526 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-horizon-secret-key\") pod \"horizon-799987499c-xpcx4\" (UID: \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\") " pod="openstack/horizon-799987499c-xpcx4" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.823112 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-etc-machine-id\") pod \"cinder-db-sync-6xt8q\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") " pod="openstack/cinder-db-sync-6xt8q" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.831409 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-db-sync-config-data\") pod \"cinder-db-sync-6xt8q\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") " pod="openstack/cinder-db-sync-6xt8q" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.845698 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-combined-ca-bundle\") pod \"cinder-db-sync-6xt8q\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") " pod="openstack/cinder-db-sync-6xt8q" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.846643 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-config-data\") pod \"cinder-db-sync-6xt8q\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") " pod="openstack/cinder-db-sync-6xt8q" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.849996 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjfgs\" (UniqueName: \"kubernetes.io/projected/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-kube-api-access-fjfgs\") pod \"cinder-db-sync-6xt8q\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") " pod="openstack/cinder-db-sync-6xt8q" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.850269 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.852659 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.862058 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.862522 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.868426 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-scripts\") pod \"cinder-db-sync-6xt8q\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") " pod="openstack/cinder-db-sync-6xt8q" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.901905 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6xt8q" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.906756 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-sgmhl"] Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.908075 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sgmhl" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.922284 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.922536 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.922654 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5s7k7" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.924899 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.936345 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-config-data\") pod \"ceilometer-0\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " pod="openstack/ceilometer-0" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.936471 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-scripts\") pod \"horizon-799987499c-xpcx4\" (UID: \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\") " pod="openstack/horizon-799987499c-xpcx4" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.936505 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f4c2e0-6bac-4c5a-affd-48f2d8301111-run-httpd\") pod \"ceilometer-0\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " pod="openstack/ceilometer-0" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.936587 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s88n\" (UniqueName: \"kubernetes.io/projected/20f4c2e0-6bac-4c5a-affd-48f2d8301111-kube-api-access-8s88n\") pod \"ceilometer-0\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " pod="openstack/ceilometer-0" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.936640 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-logs\") pod \"horizon-799987499c-xpcx4\" (UID: \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\") " pod="openstack/horizon-799987499c-xpcx4" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.936664 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-config-data\") pod \"horizon-799987499c-xpcx4\" (UID: \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\") " pod="openstack/horizon-799987499c-xpcx4" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.936864 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-horizon-secret-key\") pod \"horizon-799987499c-xpcx4\" (UID: \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\") " pod="openstack/horizon-799987499c-xpcx4" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.936929 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/20f4c2e0-6bac-4c5a-affd-48f2d8301111-log-httpd\") pod \"ceilometer-0\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " pod="openstack/ceilometer-0" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.936953 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2z72\" (UniqueName: \"kubernetes.io/projected/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-kube-api-access-k2z72\") pod \"horizon-799987499c-xpcx4\" (UID: \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\") " pod="openstack/horizon-799987499c-xpcx4" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.937065 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-scripts\") pod \"ceilometer-0\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " pod="openstack/ceilometer-0" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.937160 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " pod="openstack/ceilometer-0" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.937206 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " pod="openstack/ceilometer-0" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.937595 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-logs\") pod \"horizon-799987499c-xpcx4\" (UID: \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\") " pod="openstack/horizon-799987499c-xpcx4" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.940532 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-scripts\") pod \"horizon-799987499c-xpcx4\" (UID: \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\") " pod="openstack/horizon-799987499c-xpcx4" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.940905 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-config-data\") pod \"horizon-799987499c-xpcx4\" (UID: \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\") " pod="openstack/horizon-799987499c-xpcx4" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.952017 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-horizon-secret-key\") pod \"horizon-799987499c-xpcx4\" (UID: \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\") " pod="openstack/horizon-799987499c-xpcx4" Feb 02 17:31:10 crc kubenswrapper[4858]: I0202 17:31:10.981375 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2z72\" (UniqueName: \"kubernetes.io/projected/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-kube-api-access-k2z72\") pod \"horizon-799987499c-xpcx4\" (UID: \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\") " pod="openstack/horizon-799987499c-xpcx4" Feb 02 17:31:11 crc 
kubenswrapper[4858]: I0202 17:31:11.004095 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sgmhl"] Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.055144 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmzjf\" (UniqueName: \"kubernetes.io/projected/d5f00171-8005-4f58-a90a-5f0be6c6a48f-kube-api-access-hmzjf\") pod \"neutron-db-sync-sgmhl\" (UID: \"d5f00171-8005-4f58-a90a-5f0be6c6a48f\") " pod="openstack/neutron-db-sync-sgmhl" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.055208 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f00171-8005-4f58-a90a-5f0be6c6a48f-combined-ca-bundle\") pod \"neutron-db-sync-sgmhl\" (UID: \"d5f00171-8005-4f58-a90a-5f0be6c6a48f\") " pod="openstack/neutron-db-sync-sgmhl" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.057478 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f4c2e0-6bac-4c5a-affd-48f2d8301111-log-httpd\") pod \"ceilometer-0\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " pod="openstack/ceilometer-0" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.057599 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-scripts\") pod \"ceilometer-0\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " pod="openstack/ceilometer-0" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.057652 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " pod="openstack/ceilometer-0" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.057676 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " pod="openstack/ceilometer-0" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.057750 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-config-data\") pod \"ceilometer-0\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " pod="openstack/ceilometer-0" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.057788 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d5f00171-8005-4f58-a90a-5f0be6c6a48f-config\") pod \"neutron-db-sync-sgmhl\" (UID: \"d5f00171-8005-4f58-a90a-5f0be6c6a48f\") " pod="openstack/neutron-db-sync-sgmhl" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.057843 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f4c2e0-6bac-4c5a-affd-48f2d8301111-run-httpd\") pod \"ceilometer-0\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " pod="openstack/ceilometer-0" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.057886 4858 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-8s88n\" (UniqueName: \"kubernetes.io/projected/20f4c2e0-6bac-4c5a-affd-48f2d8301111-kube-api-access-8s88n\") pod \"ceilometer-0\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " pod="openstack/ceilometer-0" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.058658 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f4c2e0-6bac-4c5a-affd-48f2d8301111-log-httpd\") pod \"ceilometer-0\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " pod="openstack/ceilometer-0" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.060018 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f4c2e0-6bac-4c5a-affd-48f2d8301111-run-httpd\") pod \"ceilometer-0\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " pod="openstack/ceilometer-0" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.069338 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-config-data\") pod \"ceilometer-0\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " pod="openstack/ceilometer-0" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.070544 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " pod="openstack/ceilometer-0" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.072082 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " pod="openstack/ceilometer-0" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.077503 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-scripts\") pod \"ceilometer-0\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " pod="openstack/ceilometer-0" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.090368 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s88n\" (UniqueName: \"kubernetes.io/projected/20f4c2e0-6bac-4c5a-affd-48f2d8301111-kube-api-access-8s88n\") pod \"ceilometer-0\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " pod="openstack/ceilometer-0" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.115216 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-9b7bf94f7-6x2q4"] Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.116593 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-9b7bf94f7-6x2q4" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.121267 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-r5xrf"] Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.146314 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9b7bf94f7-6x2q4"] Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.160901 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmzjf\" (UniqueName: \"kubernetes.io/projected/d5f00171-8005-4f58-a90a-5f0be6c6a48f-kube-api-access-hmzjf\") pod \"neutron-db-sync-sgmhl\" (UID: \"d5f00171-8005-4f58-a90a-5f0be6c6a48f\") " pod="openstack/neutron-db-sync-sgmhl" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.160968 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f00171-8005-4f58-a90a-5f0be6c6a48f-combined-ca-bundle\") pod \"neutron-db-sync-sgmhl\" (UID: \"d5f00171-8005-4f58-a90a-5f0be6c6a48f\") " pod="openstack/neutron-db-sync-sgmhl" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.161109 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d5f00171-8005-4f58-a90a-5f0be6c6a48f-config\") pod \"neutron-db-sync-sgmhl\" (UID: \"d5f00171-8005-4f58-a90a-5f0be6c6a48f\") " pod="openstack/neutron-db-sync-sgmhl" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.167279 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d5f00171-8005-4f58-a90a-5f0be6c6a48f-config\") pod \"neutron-db-sync-sgmhl\" (UID: \"d5f00171-8005-4f58-a90a-5f0be6c6a48f\") " pod="openstack/neutron-db-sync-sgmhl" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.170463 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f00171-8005-4f58-a90a-5f0be6c6a48f-combined-ca-bundle\") pod \"neutron-db-sync-sgmhl\" (UID: \"d5f00171-8005-4f58-a90a-5f0be6c6a48f\") " pod="openstack/neutron-db-sync-sgmhl" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.179837 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-x2rtg"] Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.181236 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.181679 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmzjf\" (UniqueName: \"kubernetes.io/projected/d5f00171-8005-4f58-a90a-5f0be6c6a48f-kube-api-access-hmzjf\") pod \"neutron-db-sync-sgmhl\" (UID: \"d5f00171-8005-4f58-a90a-5f0be6c6a48f\") " pod="openstack/neutron-db-sync-sgmhl" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.190200 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.206708 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.212908 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dfr7p" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.213204 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.213321 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.214490 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.218245 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-mdq86"] Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.223134 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mdq86" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.226575 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5h5d7" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.226914 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.236335 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-799987499c-xpcx4" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.252884 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-gfbpg"] Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.255881 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-gfbpg" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.263683 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-x2rtg\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.263783 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwsx5\" (UniqueName: \"kubernetes.io/projected/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-kube-api-access-bwsx5\") pod \"horizon-9b7bf94f7-6x2q4\" (UID: \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\") " pod="openstack/horizon-9b7bf94f7-6x2q4" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.263815 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-horizon-secret-key\") pod \"horizon-9b7bf94f7-6x2q4\" (UID: \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\") " pod="openstack/horizon-9b7bf94f7-6x2q4" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.263852 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-config\") pod \"dnsmasq-dns-785d8bcb8c-x2rtg\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.263880 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-x2rtg\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.263902 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-logs\") pod \"horizon-9b7bf94f7-6x2q4\" (UID: \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\") " pod="openstack/horizon-9b7bf94f7-6x2q4" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.263965 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-x2rtg\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.264053 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-scripts\") pod \"horizon-9b7bf94f7-6x2q4\" (UID: \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\") " pod="openstack/horizon-9b7bf94f7-6x2q4" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.264084 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-config-data\") pod \"horizon-9b7bf94f7-6x2q4\" (UID: 
\"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\") " pod="openstack/horizon-9b7bf94f7-6x2q4" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.264105 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnhgt\" (UniqueName: \"kubernetes.io/projected/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-kube-api-access-bnhgt\") pod \"dnsmasq-dns-785d8bcb8c-x2rtg\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.264242 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-x2rtg\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.270098 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.270764 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-pt5l4" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.271441 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.271967 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.282593 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-x2rtg"] Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.296127 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-mdq86"] Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.298346 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sgmhl" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.316650 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.324922 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-gfbpg"] Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.374108 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56da7ca5-acf2-4372-9e48-20b829275727-combined-ca-bundle\") pod \"barbican-db-sync-mdq86\" (UID: \"56da7ca5-acf2-4372-9e48-20b829275727\") " pod="openstack/barbican-db-sync-mdq86" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.374257 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-scripts\") pod \"placement-db-sync-gfbpg\" (UID: \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\") " pod="openstack/placement-db-sync-gfbpg" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.374317 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-x2rtg\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.374375 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqx4b\" (UniqueName: \"kubernetes.io/projected/56da7ca5-acf2-4372-9e48-20b829275727-kube-api-access-cqx4b\") pod \"barbican-db-sync-mdq86\" (UID: \"56da7ca5-acf2-4372-9e48-20b829275727\") " pod="openstack/barbican-db-sync-mdq86" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.374418 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-scripts\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.374449 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-x2rtg\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.374474 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-combined-ca-bundle\") pod \"placement-db-sync-gfbpg\" (UID: \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\") " pod="openstack/placement-db-sync-gfbpg" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.374517 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-config-data\") pod \"placement-db-sync-gfbpg\" (UID: \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\") " pod="openstack/placement-db-sync-gfbpg" 
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.374589 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.374623 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-logs\") pod \"placement-db-sync-gfbpg\" (UID: \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\") " pod="openstack/placement-db-sync-gfbpg" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.374710 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-horizon-secret-key\") pod \"horizon-9b7bf94f7-6x2q4\" (UID: \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\") " pod="openstack/horizon-9b7bf94f7-6x2q4" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.374731 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwsx5\" (UniqueName: \"kubernetes.io/projected/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-kube-api-access-bwsx5\") pod \"horizon-9b7bf94f7-6x2q4\" (UID: \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\") " pod="openstack/horizon-9b7bf94f7-6x2q4" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.374763 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-config-data\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.374830 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-config\") pod \"dnsmasq-dns-785d8bcb8c-x2rtg\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.375230 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-x2rtg\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.375237 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xx6t\" (UniqueName: \"kubernetes.io/projected/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-kube-api-access-2xx6t\") pod \"placement-db-sync-gfbpg\" (UID: \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\") " pod="openstack/placement-db-sync-gfbpg" Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.375536 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-x2rtg\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" Feb 02 17:31:11 crc 
kubenswrapper[4858]: I0202 17:31:11.375584 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-logs\") pod \"horizon-9b7bf94f7-6x2q4\" (UID: \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\") " pod="openstack/horizon-9b7bf94f7-6x2q4"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.375647 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/56da7ca5-acf2-4372-9e48-20b829275727-db-sync-config-data\") pod \"barbican-db-sync-mdq86\" (UID: \"56da7ca5-acf2-4372-9e48-20b829275727\") " pod="openstack/barbican-db-sync-mdq86"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.375689 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.375715 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.375759 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6fcea235-9da5-4a12-a312-83243be556bd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.375789 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp4gm\" (UniqueName: \"kubernetes.io/projected/6fcea235-9da5-4a12-a312-83243be556bd-kube-api-access-sp4gm\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.375840 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-x2rtg\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.376120 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-x2rtg\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.376139 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-x2rtg\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.376924 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-scripts\") pod \"horizon-9b7bf94f7-6x2q4\" (UID: \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\") " pod="openstack/horizon-9b7bf94f7-6x2q4"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.377018 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-config-data\") pod \"horizon-9b7bf94f7-6x2q4\" (UID: \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\") " pod="openstack/horizon-9b7bf94f7-6x2q4"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.377053 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnhgt\" (UniqueName: \"kubernetes.io/projected/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-kube-api-access-bnhgt\") pod \"dnsmasq-dns-785d8bcb8c-x2rtg\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.377085 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-config\") pod \"dnsmasq-dns-785d8bcb8c-x2rtg\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.377112 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fcea235-9da5-4a12-a312-83243be556bd-logs\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.378237 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-logs\") pod \"horizon-9b7bf94f7-6x2q4\" (UID: \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\") " pod="openstack/horizon-9b7bf94f7-6x2q4"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.378633 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-x2rtg\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.379526 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-scripts\") pod \"horizon-9b7bf94f7-6x2q4\" (UID: \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\") " pod="openstack/horizon-9b7bf94f7-6x2q4"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.381956 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-config-data\") pod \"horizon-9b7bf94f7-6x2q4\" (UID: \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\") " pod="openstack/horizon-9b7bf94f7-6x2q4"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.384989 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-horizon-secret-key\") pod \"horizon-9b7bf94f7-6x2q4\" (UID: \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\") " pod="openstack/horizon-9b7bf94f7-6x2q4"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.397454 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnhgt\" (UniqueName: \"kubernetes.io/projected/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-kube-api-access-bnhgt\") pod \"dnsmasq-dns-785d8bcb8c-x2rtg\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.399213 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwsx5\" (UniqueName: \"kubernetes.io/projected/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-kube-api-access-bwsx5\") pod \"horizon-9b7bf94f7-6x2q4\" (UID: \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\") " pod="openstack/horizon-9b7bf94f7-6x2q4"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.433765 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9b7bf94f7-6x2q4"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.481352 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fcea235-9da5-4a12-a312-83243be556bd-logs\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.481412 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56da7ca5-acf2-4372-9e48-20b829275727-combined-ca-bundle\") pod \"barbican-db-sync-mdq86\" (UID: \"56da7ca5-acf2-4372-9e48-20b829275727\") " pod="openstack/barbican-db-sync-mdq86"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.484346 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2cblc"]
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.486940 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-scripts\") pod \"placement-db-sync-gfbpg\" (UID: \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\") " pod="openstack/placement-db-sync-gfbpg"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.487030 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqx4b\" (UniqueName: \"kubernetes.io/projected/56da7ca5-acf2-4372-9e48-20b829275727-kube-api-access-cqx4b\") pod \"barbican-db-sync-mdq86\" (UID: \"56da7ca5-acf2-4372-9e48-20b829275727\") " pod="openstack/barbican-db-sync-mdq86"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.487052 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-scripts\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.487072 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-combined-ca-bundle\") pod \"placement-db-sync-gfbpg\" (UID: \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\") " pod="openstack/placement-db-sync-gfbpg"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.487094 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-config-data\") pod \"placement-db-sync-gfbpg\" (UID: \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\") " pod="openstack/placement-db-sync-gfbpg"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.487124 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.487145 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-logs\") pod \"placement-db-sync-gfbpg\" (UID: \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\") " pod="openstack/placement-db-sync-gfbpg"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.487230 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-config-data\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.487260 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xx6t\" (UniqueName: \"kubernetes.io/projected/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-kube-api-access-2xx6t\") pod \"placement-db-sync-gfbpg\" (UID: \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\") " pod="openstack/placement-db-sync-gfbpg"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.487329 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/56da7ca5-acf2-4372-9e48-20b829275727-db-sync-config-data\") pod \"barbican-db-sync-mdq86\" (UID: \"56da7ca5-acf2-4372-9e48-20b829275727\") " pod="openstack/barbican-db-sync-mdq86"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.487356 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.487373 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.487399 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6fcea235-9da5-4a12-a312-83243be556bd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.487413 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp4gm\" (UniqueName: \"kubernetes.io/projected/6fcea235-9da5-4a12-a312-83243be556bd-kube-api-access-sp4gm\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.491895 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fcea235-9da5-4a12-a312-83243be556bd-logs\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.498893 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-config-data\") pod \"placement-db-sync-gfbpg\" (UID: \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\") " pod="openstack/placement-db-sync-gfbpg"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.508060 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.508429 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-logs\") pod \"placement-db-sync-gfbpg\" (UID: \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\") " pod="openstack/placement-db-sync-gfbpg"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.511717 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.512242 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.512952 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56da7ca5-acf2-4372-9e48-20b829275727-combined-ca-bundle\") pod \"barbican-db-sync-mdq86\" (UID: \"56da7ca5-acf2-4372-9e48-20b829275727\") " pod="openstack/barbican-db-sync-mdq86"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.514036 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6fcea235-9da5-4a12-a312-83243be556bd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.514099 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.516751 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-combined-ca-bundle\") pod \"placement-db-sync-gfbpg\" (UID: \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\") " pod="openstack/placement-db-sync-gfbpg"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.522779 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xx6t\" (UniqueName: \"kubernetes.io/projected/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-kube-api-access-2xx6t\") pod \"placement-db-sync-gfbpg\" (UID: \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\") " pod="openstack/placement-db-sync-gfbpg"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.523025 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-scripts\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.527351 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-config-data\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.538142 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqx4b\" (UniqueName: \"kubernetes.io/projected/56da7ca5-acf2-4372-9e48-20b829275727-kube-api-access-cqx4b\") pod \"barbican-db-sync-mdq86\" (UID: \"56da7ca5-acf2-4372-9e48-20b829275727\") " pod="openstack/barbican-db-sync-mdq86"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.538215 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-scripts\") pod \"placement-db-sync-gfbpg\" (UID: \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\") " pod="openstack/placement-db-sync-gfbpg"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.544736 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp4gm\" (UniqueName: \"kubernetes.io/projected/6fcea235-9da5-4a12-a312-83243be556bd-kube-api-access-sp4gm\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.565148 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/56da7ca5-acf2-4372-9e48-20b829275727-db-sync-config-data\") pod \"barbican-db-sync-mdq86\" (UID: \"56da7ca5-acf2-4372-9e48-20b829275727\") " pod="openstack/barbican-db-sync-mdq86"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.581318 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mdq86"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.581618 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.597548 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-gfbpg"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.604724 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-r5xrf"]
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.648956 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b2dtp"]
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.649215 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b2dtp" podUID="d0e14259-bf4e-47d1-952c-c17076756fd5" containerName="registry-server" containerID="cri-o://b080a9610b49d8d11666af825d11cc07a177aa34548a7c776a49840ba8ba85a5" gracePeriod=2
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.721677 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6xt8q"]
Feb 02 17:31:11 crc kubenswrapper[4858]: W0202 17:31:11.745357 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5a9fadc_338f_44bb_8ebd_bc4fe01972bf.slice/crio-daab15a97c73c6488f62e69272c2e17d3869233847bdb61d152cb0cb95a75b80 WatchSource:0}: Error finding container daab15a97c73c6488f62e69272c2e17d3869233847bdb61d152cb0cb95a75b80: Status 404 returned error can't find the container with id daab15a97c73c6488f62e69272c2e17d3869233847bdb61d152cb0cb95a75b80
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.809205 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.810769 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.813178 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.814584 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.817418 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.828383 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.898346 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.898387 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.898413 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.898435 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-logs\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.898459 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds67x\" (UniqueName: \"kubernetes.io/projected/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-kube-api-access-ds67x\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.898556 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.898580 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:11 crc kubenswrapper[4858]: I0202 17:31:11.898612 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:11.999860 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.000276 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.000298 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.000330 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.000349 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-logs\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.000372 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds67x\" (UniqueName: \"kubernetes.io/projected/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-kube-api-access-ds67x\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.000436 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.000459 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.000762 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.001425 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-logs\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.004343 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.011784 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.013791 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.014683 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.026559 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds67x\" (UniqueName: \"kubernetes.io/projected/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-kube-api-access-ds67x\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.069407 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.076184 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.089546 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2cblc" event={"ID":"9a10b005-fba9-454e-b8a8-e7ffa96fc978","Type":"ContainerStarted","Data":"5f171854a297a7e5f526a5203124f739c4e7832e73341cdd4452be0c487de97f"}
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.096284 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6xt8q" event={"ID":"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf","Type":"ContainerStarted","Data":"daab15a97c73c6488f62e69272c2e17d3869233847bdb61d152cb0cb95a75b80"}
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.105707 4858 generic.go:334] "Generic (PLEG): container finished" podID="d0e14259-bf4e-47d1-952c-c17076756fd5" containerID="b080a9610b49d8d11666af825d11cc07a177aa34548a7c776a49840ba8ba85a5" exitCode=0
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.105763 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2dtp" event={"ID":"d0e14259-bf4e-47d1-952c-c17076756fd5","Type":"ContainerDied","Data":"b080a9610b49d8d11666af825d11cc07a177aa34548a7c776a49840ba8ba85a5"}
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.106730 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" event={"ID":"74e45947-b64b-41c2-8b25-04a632777ca1","Type":"ContainerStarted","Data":"2ef74e621ab8222ab85485130595719f2b1c90516079db480e36453a65c7e975"}
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:12.318773 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.131779 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2cblc" event={"ID":"9a10b005-fba9-454e-b8a8-e7ffa96fc978","Type":"ContainerStarted","Data":"17c18a68c197dd49f22046cec6a9275bbdace79f384ad7a69c7a7a19904202f4"}
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.140425 4858 generic.go:334] "Generic (PLEG): container finished" podID="74e45947-b64b-41c2-8b25-04a632777ca1" containerID="a5a2f79e12a68d1dc70dfa5fa553d4dc04db6b151c3e59b843f9792eb506814d" exitCode=0
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.140482 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" event={"ID":"74e45947-b64b-41c2-8b25-04a632777ca1","Type":"ContainerDied","Data":"a5a2f79e12a68d1dc70dfa5fa553d4dc04db6b151c3e59b843f9792eb506814d"}
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.219728 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2cblc" podStartSLOduration=3.219711559 podStartE2EDuration="3.219711559s" podCreationTimestamp="2026-02-02 17:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:31:13.218193636 +0000 UTC m=+974.370608921" watchObservedRunningTime="2026-02-02 17:31:13.219711559 +0000 UTC m=+974.372126824"
Feb 02 17:31:13 crc kubenswrapper[4858]: W0202 17:31:13.605720 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f53e6ec_a84c_4b0f_bd1b_20acda3a9451.slice/crio-8e328a67870e07c310ef6053329233411fed8c4be7f5eef890a95a3b8bf02601 WatchSource:0}: Error finding container 8e328a67870e07c310ef6053329233411fed8c4be7f5eef890a95a3b8bf02601: Status 404 returned error can't find the container with id 8e328a67870e07c310ef6053329233411fed8c4be7f5eef890a95a3b8bf02601
Feb 02 17:31:13 crc kubenswrapper[4858]: W0202 17:31:13.615198 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5f00171_8005_4f58_a90a_5f0be6c6a48f.slice/crio-4672551d0726eb625825b03a3cfd37825f953841025762e2ec827cf1bb928e7f WatchSource:0}: Error finding container 4672551d0726eb625825b03a3cfd37825f953841025762e2ec827cf1bb928e7f: Status 404 returned error can't find the container with id 4672551d0726eb625825b03a3cfd37825f953841025762e2ec827cf1bb928e7f
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.623823 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.636697 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-799987499c-xpcx4"]
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.652306 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-799987499c-xpcx4"]
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.672529 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.686991 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-85955cfd75-8bjzt"]
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.689155 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85955cfd75-8bjzt"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.703642 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sgmhl"]
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.741810 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc9f8ef4-b053-45da-98da-c2050420fcc6-logs\") pod \"horizon-85955cfd75-8bjzt\" (UID: \"dc9f8ef4-b053-45da-98da-c2050420fcc6\") " pod="openstack/horizon-85955cfd75-8bjzt"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.741893 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc9f8ef4-b053-45da-98da-c2050420fcc6-scripts\") pod \"horizon-85955cfd75-8bjzt\" (UID: \"dc9f8ef4-b053-45da-98da-c2050420fcc6\") " pod="openstack/horizon-85955cfd75-8bjzt"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.742406 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffjkq\" (UniqueName: \"kubernetes.io/projected/dc9f8ef4-b053-45da-98da-c2050420fcc6-kube-api-access-ffjkq\") pod \"horizon-85955cfd75-8bjzt\" (UID: \"dc9f8ef4-b053-45da-98da-c2050420fcc6\") " pod="openstack/horizon-85955cfd75-8bjzt"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.742497 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dc9f8ef4-b053-45da-98da-c2050420fcc6-horizon-secret-key\") pod \"horizon-85955cfd75-8bjzt\" (UID: \"dc9f8ef4-b053-45da-98da-c2050420fcc6\") " pod="openstack/horizon-85955cfd75-8bjzt"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.742646 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc9f8ef4-b053-45da-98da-c2050420fcc6-config-data\") pod \"horizon-85955cfd75-8bjzt\" (UID: \"dc9f8ef4-b053-45da-98da-c2050420fcc6\") " pod="openstack/horizon-85955cfd75-8bjzt"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.749592 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-85955cfd75-8bjzt"]
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.768241 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.846515 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc9f8ef4-b053-45da-98da-c2050420fcc6-logs\") pod \"horizon-85955cfd75-8bjzt\" (UID: \"dc9f8ef4-b053-45da-98da-c2050420fcc6\") " pod="openstack/horizon-85955cfd75-8bjzt"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.846571 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc9f8ef4-b053-45da-98da-c2050420fcc6-scripts\") pod \"horizon-85955cfd75-8bjzt\" (UID: \"dc9f8ef4-b053-45da-98da-c2050420fcc6\") " pod="openstack/horizon-85955cfd75-8bjzt"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.846616 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffjkq\" (UniqueName: \"kubernetes.io/projected/dc9f8ef4-b053-45da-98da-c2050420fcc6-kube-api-access-ffjkq\") pod \"horizon-85955cfd75-8bjzt\" (UID: \"dc9f8ef4-b053-45da-98da-c2050420fcc6\") " pod="openstack/horizon-85955cfd75-8bjzt"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.846658 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dc9f8ef4-b053-45da-98da-c2050420fcc6-horizon-secret-key\") pod \"horizon-85955cfd75-8bjzt\" (UID: \"dc9f8ef4-b053-45da-98da-c2050420fcc6\") " pod="openstack/horizon-85955cfd75-8bjzt"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.846700 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc9f8ef4-b053-45da-98da-c2050420fcc6-config-data\") pod \"horizon-85955cfd75-8bjzt\" (UID: \"dc9f8ef4-b053-45da-98da-c2050420fcc6\") " pod="openstack/horizon-85955cfd75-8bjzt"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.848087 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc9f8ef4-b053-45da-98da-c2050420fcc6-config-data\") pod \"horizon-85955cfd75-8bjzt\" (UID: \"dc9f8ef4-b053-45da-98da-c2050420fcc6\") " pod="openstack/horizon-85955cfd75-8bjzt"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.848433 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc9f8ef4-b053-45da-98da-c2050420fcc6-logs\") pod \"horizon-85955cfd75-8bjzt\" (UID: \"dc9f8ef4-b053-45da-98da-c2050420fcc6\") " pod="openstack/horizon-85955cfd75-8bjzt"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.848995 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc9f8ef4-b053-45da-98da-c2050420fcc6-scripts\") pod \"horizon-85955cfd75-8bjzt\" (UID: \"dc9f8ef4-b053-45da-98da-c2050420fcc6\") " pod="openstack/horizon-85955cfd75-8bjzt"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.856756 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dc9f8ef4-b053-45da-98da-c2050420fcc6-horizon-secret-key\") pod \"horizon-85955cfd75-8bjzt\" (UID: \"dc9f8ef4-b053-45da-98da-c2050420fcc6\") " pod="openstack/horizon-85955cfd75-8bjzt"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.888631 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffjkq\" (UniqueName: \"kubernetes.io/projected/dc9f8ef4-b053-45da-98da-c2050420fcc6-kube-api-access-ffjkq\") pod \"horizon-85955cfd75-8bjzt\" (UID: \"dc9f8ef4-b053-45da-98da-c2050420fcc6\") " pod="openstack/horizon-85955cfd75-8bjzt"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.929224 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b2dtp"
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.934169 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.948201 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzc5x\" (UniqueName: \"kubernetes.io/projected/d0e14259-bf4e-47d1-952c-c17076756fd5-kube-api-access-lzc5x\") pod \"d0e14259-bf4e-47d1-952c-c17076756fd5\" (UID: \"d0e14259-bf4e-47d1-952c-c17076756fd5\") "
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.948317 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e14259-bf4e-47d1-952c-c17076756fd5-catalog-content\") pod \"d0e14259-bf4e-47d1-952c-c17076756fd5\" (UID: \"d0e14259-bf4e-47d1-952c-c17076756fd5\") "
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.948370 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e14259-bf4e-47d1-952c-c17076756fd5-utilities\") pod \"d0e14259-bf4e-47d1-952c-c17076756fd5\" (UID: \"d0e14259-bf4e-47d1-952c-c17076756fd5\") "
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.950333 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0e14259-bf4e-47d1-952c-c17076756fd5-utilities" (OuterVolumeSpecName: "utilities") pod "d0e14259-bf4e-47d1-952c-c17076756fd5" (UID: "d0e14259-bf4e-47d1-952c-c17076756fd5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.970138 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9b7bf94f7-6x2q4"]
Feb 02 17:31:13 crc kubenswrapper[4858]: I0202 17:31:13.983549 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e14259-bf4e-47d1-952c-c17076756fd5-kube-api-access-lzc5x" (OuterVolumeSpecName: "kube-api-access-lzc5x") pod "d0e14259-bf4e-47d1-952c-c17076756fd5" (UID: "d0e14259-bf4e-47d1-952c-c17076756fd5"). InnerVolumeSpecName "kube-api-access-lzc5x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.059669 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e14259-bf4e-47d1-952c-c17076756fd5-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.059713 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzc5x\" (UniqueName: \"kubernetes.io/projected/d0e14259-bf4e-47d1-952c-c17076756fd5-kube-api-access-lzc5x\") on node \"crc\" DevicePath \"\""
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.079021 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0e14259-bf4e-47d1-952c-c17076756fd5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0e14259-bf4e-47d1-952c-c17076756fd5" (UID: "d0e14259-bf4e-47d1-952c-c17076756fd5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.079355 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-mdq86"]
Feb 02 17:31:14 crc kubenswrapper[4858]: W0202 17:31:14.085257 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56da7ca5_acf2_4372_9e48_20b829275727.slice/crio-48a4f045594174cfed4981c851385e3d22d1a3d986c3140e3a272ff84cf3492e WatchSource:0}: Error finding container 48a4f045594174cfed4981c851385e3d22d1a3d986c3140e3a272ff84cf3492e: Status 404 returned error can't find the container with id 48a4f045594174cfed4981c851385e3d22d1a3d986c3140e3a272ff84cf3492e
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.103587 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-r5xrf"
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.137967 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85955cfd75-8bjzt"
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.150604 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mdq86" event={"ID":"56da7ca5-acf2-4372-9e48-20b829275727","Type":"ContainerStarted","Data":"48a4f045594174cfed4981c851385e3d22d1a3d986c3140e3a272ff84cf3492e"}
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.153506 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2dtp" event={"ID":"d0e14259-bf4e-47d1-952c-c17076756fd5","Type":"ContainerDied","Data":"a14949e7d5f4ebe13d53a9ce540ff4925afea1f658f3acdf4aaa9a6f9cbf1bdb"}
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.153533 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b2dtp"
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.153564 4858 scope.go:117] "RemoveContainer" containerID="b080a9610b49d8d11666af825d11cc07a177aa34548a7c776a49840ba8ba85a5"
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.161936 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mmx2\" (UniqueName: \"kubernetes.io/projected/74e45947-b64b-41c2-8b25-04a632777ca1-kube-api-access-9mmx2\") pod \"74e45947-b64b-41c2-8b25-04a632777ca1\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") "
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.161986 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-dns-swift-storage-0\") pod \"74e45947-b64b-41c2-8b25-04a632777ca1\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") "
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.162030 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-config\") pod \"74e45947-b64b-41c2-8b25-04a632777ca1\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") "
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.162069 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-ovsdbserver-nb\") pod \"74e45947-b64b-41c2-8b25-04a632777ca1\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") "
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.162191 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-dns-svc\") pod \"74e45947-b64b-41c2-8b25-04a632777ca1\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") "
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.162270 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-ovsdbserver-sb\") pod \"74e45947-b64b-41c2-8b25-04a632777ca1\" (UID: \"74e45947-b64b-41c2-8b25-04a632777ca1\") "
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.162535 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-r5xrf"
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.162575 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e14259-bf4e-47d1-952c-c17076756fd5-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.162933 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-r5xrf" event={"ID":"74e45947-b64b-41c2-8b25-04a632777ca1","Type":"ContainerDied","Data":"2ef74e621ab8222ab85485130595719f2b1c90516079db480e36453a65c7e975"}
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.168782 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sgmhl" event={"ID":"d5f00171-8005-4f58-a90a-5f0be6c6a48f","Type":"ContainerStarted","Data":"0dd7abadbe4ebe5d95892a5fcbf84962471de4be3deb29b052c9ba7afbec7b2c"}
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.168817 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sgmhl" event={"ID":"d5f00171-8005-4f58-a90a-5f0be6c6a48f","Type":"ContainerStarted","Data":"4672551d0726eb625825b03a3cfd37825f953841025762e2ec827cf1bb928e7f"}
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.180731 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-799987499c-xpcx4" event={"ID":"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451","Type":"ContainerStarted","Data":"8e328a67870e07c310ef6053329233411fed8c4be7f5eef890a95a3b8bf02601"}
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.188662 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f4c2e0-6bac-4c5a-affd-48f2d8301111","Type":"ContainerStarted","Data":"1f7ef965e23a29514664736538ca6e53b6e2913e318e7b595a31dc91bb52bc54"}
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.202198 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-sgmhl" podStartSLOduration=4.202175934 podStartE2EDuration="4.202175934s" podCreationTimestamp="2026-02-02 17:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:31:14.188477733 +0000 UTC m=+975.340893018" watchObservedRunningTime="2026-02-02 17:31:14.202175934 +0000 UTC m=+975.354591209"
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.206511 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74e45947-b64b-41c2-8b25-04a632777ca1-kube-api-access-9mmx2" (OuterVolumeSpecName: "kube-api-access-9mmx2") pod "74e45947-b64b-41c2-8b25-04a632777ca1" (UID: "74e45947-b64b-41c2-8b25-04a632777ca1"). InnerVolumeSpecName "kube-api-access-9mmx2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.206593 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9b7bf94f7-6x2q4" event={"ID":"b64a755d-846e-4d73-9e25-8dbe3bb5c30f","Type":"ContainerStarted","Data":"8a49043e05c16d6c103a3f3128f6ff118e146c9ba0291651376ba62680a861b0"}
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.218160 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "74e45947-b64b-41c2-8b25-04a632777ca1" (UID: "74e45947-b64b-41c2-8b25-04a632777ca1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.223650 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "74e45947-b64b-41c2-8b25-04a632777ca1" (UID: "74e45947-b64b-41c2-8b25-04a632777ca1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.230618 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-config" (OuterVolumeSpecName: "config") pod "74e45947-b64b-41c2-8b25-04a632777ca1" (UID: "74e45947-b64b-41c2-8b25-04a632777ca1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.235815 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "74e45947-b64b-41c2-8b25-04a632777ca1" (UID: "74e45947-b64b-41c2-8b25-04a632777ca1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.240268 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b2dtp"]
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.254872 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-x2rtg"]
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.256421 4858 scope.go:117] "RemoveContainer" containerID="13804e8dac9eba4684630a3d44b00b02724230a7c8ed82ea16d1c3dd774e6785"
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.276932 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b2dtp"]
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.284669 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "74e45947-b64b-41c2-8b25-04a632777ca1" (UID: "74e45947-b64b-41c2-8b25-04a632777ca1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.295870 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.295904 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.295920 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mmx2\" (UniqueName: \"kubernetes.io/projected/74e45947-b64b-41c2-8b25-04a632777ca1-kube-api-access-9mmx2\") on node \"crc\" DevicePath \"\""
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.295930 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.295938 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-config\") on node \"crc\" DevicePath \"\""
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.295946 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74e45947-b64b-41c2-8b25-04a632777ca1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.315293 4858 scope.go:117] "RemoveContainer" containerID="a290d78cfebda816319cd79bd34d4a5368187947e1f8a7976fd7673b4f5f5565"
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.317063 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-gfbpg"]
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.405729 4858 scope.go:117] "RemoveContainer" containerID="a5a2f79e12a68d1dc70dfa5fa553d4dc04db6b151c3e59b843f9792eb506814d"
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.445371 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0e14259-bf4e-47d1-952c-c17076756fd5" path="/var/lib/kubelet/pods/d0e14259-bf4e-47d1-952c-c17076756fd5/volumes"
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.666859 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-r5xrf"]
Feb 02 17:31:14 crc kubenswrapper[4858]: I0202 17:31:14.676184 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-r5xrf"]
Feb 02 17:31:15 crc kubenswrapper[4858]: I0202 17:31:15.006317 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-85955cfd75-8bjzt"]
Feb 02 17:31:15 crc kubenswrapper[4858]: I0202 17:31:15.027405 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 17:31:15 crc kubenswrapper[4858]: W0202 17:31:15.028959 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc9f8ef4_b053_45da_98da_c2050420fcc6.slice/crio-f92d2cd1fe5542697f5ff52c8f88373e4fdebae3a3d117198968643b8bc015af WatchSource:0}: Error finding container f92d2cd1fe5542697f5ff52c8f88373e4fdebae3a3d117198968643b8bc015af: Status 404 returned error can't find the container with id f92d2cd1fe5542697f5ff52c8f88373e4fdebae3a3d117198968643b8bc015af
Feb 02 17:31:15 crc kubenswrapper[4858]: I0202 17:31:15.084377 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 17:31:15 crc kubenswrapper[4858]: W0202 17:31:15.103113 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc004c88f_a1ab_46fd_bb84_65dc1d7790c0.slice/crio-db55197a4a6b1b6a09b17f52caeae6e30a9829564909a22e945110d73d590963 WatchSource:0}: Error finding container db55197a4a6b1b6a09b17f52caeae6e30a9829564909a22e945110d73d590963: Status 404 returned error can't find the container with id db55197a4a6b1b6a09b17f52caeae6e30a9829564909a22e945110d73d590963
Feb 02 17:31:15 crc kubenswrapper[4858]: I0202 17:31:15.256261 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6fcea235-9da5-4a12-a312-83243be556bd","Type":"ContainerStarted","Data":"0a4e87065223244fed9be00309757298b6611b295add6fd7141cc4b51e5cbd2a"}
Feb 02 17:31:15 crc kubenswrapper[4858]: I0202 17:31:15.258881 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c004c88f-a1ab-46fd-bb84-65dc1d7790c0","Type":"ContainerStarted","Data":"db55197a4a6b1b6a09b17f52caeae6e30a9829564909a22e945110d73d590963"}
Feb 02 17:31:15 crc kubenswrapper[4858]: I0202 17:31:15.282837 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" event={"ID":"a85b3eb5-0945-45b7-875b-e20c2c0e29f7","Type":"ContainerStarted","Data":"9d26e942381568751f95b01992ae9e1f51e5f64cc97829f654707063346dd694"}
Feb 02 17:31:15 crc kubenswrapper[4858]: I0202 17:31:15.286458 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85955cfd75-8bjzt" event={"ID":"dc9f8ef4-b053-45da-98da-c2050420fcc6","Type":"ContainerStarted","Data":"f92d2cd1fe5542697f5ff52c8f88373e4fdebae3a3d117198968643b8bc015af"}
Feb 02 17:31:15 crc kubenswrapper[4858]: I0202 17:31:15.290530 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gfbpg" event={"ID":"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d","Type":"ContainerStarted","Data":"ee7fc42da061c4dfebfd296535e55b9f7a7721824af3981e27f36dd0d4251288"}
Feb 02 17:31:16 crc kubenswrapper[4858]: I0202 17:31:16.315953 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6fcea235-9da5-4a12-a312-83243be556bd","Type":"ContainerStarted","Data":"04212a4550e174d6fb9cad97613eef9e31790cb7d4eeb63fabf19f4ac0711f2d"}
Feb 02 17:31:16 crc kubenswrapper[4858]: I0202 17:31:16.318590 4858 generic.go:334] "Generic (PLEG): container finished" podID="a85b3eb5-0945-45b7-875b-e20c2c0e29f7" containerID="564fe604ab337a5d5ab3f10a2ca26b080451c39a773bb145a019cb89c5ba723a" exitCode=0
Feb 02 17:31:16 crc kubenswrapper[4858]: I0202 17:31:16.318684 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" event={"ID":"a85b3eb5-0945-45b7-875b-e20c2c0e29f7","Type":"ContainerDied","Data":"564fe604ab337a5d5ab3f10a2ca26b080451c39a773bb145a019cb89c5ba723a"}
Feb 02 17:31:16 crc kubenswrapper[4858]: I0202 17:31:16.330159 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c004c88f-a1ab-46fd-bb84-65dc1d7790c0","Type":"ContainerStarted","Data":"6f57b4fd25e9089fe610f3bba0b905d886920f36552692063ade2571971d5948"}
Feb 02 17:31:16 crc kubenswrapper[4858]: I0202 17:31:16.422855 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74e45947-b64b-41c2-8b25-04a632777ca1" path="/var/lib/kubelet/pods/74e45947-b64b-41c2-8b25-04a632777ca1/volumes"
Feb 02 17:31:17 crc kubenswrapper[4858]: I0202 17:31:17.345455 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6fcea235-9da5-4a12-a312-83243be556bd","Type":"ContainerStarted","Data":"b9d7667c1cf2892653de46b42b17ec3909cebd86dee445f51fd0551bc2697832"}
Feb 02 17:31:17 crc kubenswrapper[4858]: I0202 17:31:17.345548 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6fcea235-9da5-4a12-a312-83243be556bd" containerName="glance-log" containerID="cri-o://04212a4550e174d6fb9cad97613eef9e31790cb7d4eeb63fabf19f4ac0711f2d" gracePeriod=30
Feb 02 17:31:17 crc kubenswrapper[4858]: I0202 17:31:17.345964 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6fcea235-9da5-4a12-a312-83243be556bd" containerName="glance-httpd" containerID="cri-o://b9d7667c1cf2892653de46b42b17ec3909cebd86dee445f51fd0551bc2697832" gracePeriod=30
Feb 02 17:31:17 crc kubenswrapper[4858]: I0202 17:31:17.389404 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c004c88f-a1ab-46fd-bb84-65dc1d7790c0" containerName="glance-log" containerID="cri-o://6f57b4fd25e9089fe610f3bba0b905d886920f36552692063ade2571971d5948" gracePeriod=30
Feb 02 17:31:17 crc kubenswrapper[4858]: I0202 17:31:17.389635 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c004c88f-a1ab-46fd-bb84-65dc1d7790c0","Type":"ContainerStarted","Data":"48c8143009091c0dae03a566508692db0cc95e8fd7f7229c6b99910257524367"}
Feb 02 17:31:17 crc kubenswrapper[4858]: I0202 17:31:17.389688 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c004c88f-a1ab-46fd-bb84-65dc1d7790c0" containerName="glance-httpd" containerID="cri-o://48c8143009091c0dae03a566508692db0cc95e8fd7f7229c6b99910257524367" gracePeriod=30
Feb 02 17:31:17 crc kubenswrapper[4858]: I0202 17:31:17.401712 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.401688759 podStartE2EDuration="7.401688759s" podCreationTimestamp="2026-02-02 17:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:31:17.389952693 +0000 UTC m=+978.542367958" watchObservedRunningTime="2026-02-02 17:31:17.401688759 +0000 UTC m=+978.554104024"
Feb 02 17:31:17 crc kubenswrapper[4858]: I0202 17:31:17.426656 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" event={"ID":"a85b3eb5-0945-45b7-875b-e20c2c0e29f7","Type":"ContainerStarted","Data":"9064ecb9ffc75e6be7cdfc0cb9f8e4662fcc85336316095459f8aa3519d0de96"}
Feb 02 17:31:17 crc kubenswrapper[4858]: I0202 17:31:17.428559 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg"
Feb 02 17:31:17 crc kubenswrapper[4858]: I0202 17:31:17.448712 4858 generic.go:334] "Generic (PLEG): container finished" podID="9a10b005-fba9-454e-b8a8-e7ffa96fc978" containerID="17c18a68c197dd49f22046cec6a9275bbdace79f384ad7a69c7a7a19904202f4" exitCode=0
Feb 02 17:31:17 crc kubenswrapper[4858]: I0202 17:31:17.448758 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2cblc" event={"ID":"9a10b005-fba9-454e-b8a8-e7ffa96fc978","Type":"ContainerDied","Data":"17c18a68c197dd49f22046cec6a9275bbdace79f384ad7a69c7a7a19904202f4"}
Feb 02 17:31:17 crc kubenswrapper[4858]: I0202 17:31:17.508701 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" podStartSLOduration=7.508686198 podStartE2EDuration="7.508686198s" podCreationTimestamp="2026-02-02 17:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:31:17.48044803 +0000 UTC m=+978.632863295" watchObservedRunningTime="2026-02-02 17:31:17.508686198 +0000 UTC m=+978.661101463"
Feb 02 17:31:17 crc kubenswrapper[4858]: I0202 17:31:17.509417 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.509411898 podStartE2EDuration="7.509411898s" podCreationTimestamp="2026-02-02 17:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:31:17.437282776 +0000 UTC m=+978.589698041" watchObservedRunningTime="2026-02-02 17:31:17.509411898 +0000 UTC m=+978.661827163"
Feb 02 17:31:18 crc kubenswrapper[4858]: I0202 17:31:18.475218 4858 generic.go:334] "Generic (PLEG): container finished" podID="6fcea235-9da5-4a12-a312-83243be556bd" containerID="b9d7667c1cf2892653de46b42b17ec3909cebd86dee445f51fd0551bc2697832" exitCode=0
Feb 02 17:31:18 crc kubenswrapper[4858]: I0202 17:31:18.475314 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6fcea235-9da5-4a12-a312-83243be556bd","Type":"ContainerDied","Data":"b9d7667c1cf2892653de46b42b17ec3909cebd86dee445f51fd0551bc2697832"}
Feb 02 17:31:18 crc kubenswrapper[4858]: I0202 17:31:18.475950 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6fcea235-9da5-4a12-a312-83243be556bd","Type":"ContainerDied","Data":"04212a4550e174d6fb9cad97613eef9e31790cb7d4eeb63fabf19f4ac0711f2d"}
Feb 02 17:31:18 crc kubenswrapper[4858]: I0202 17:31:18.475778 4858 generic.go:334] "Generic (PLEG): container finished" podID="6fcea235-9da5-4a12-a312-83243be556bd" containerID="04212a4550e174d6fb9cad97613eef9e31790cb7d4eeb63fabf19f4ac0711f2d" exitCode=143
Feb 02 17:31:18 crc kubenswrapper[4858]: I0202 17:31:18.479109 4858 generic.go:334] "Generic (PLEG): container finished" podID="c004c88f-a1ab-46fd-bb84-65dc1d7790c0" containerID="48c8143009091c0dae03a566508692db0cc95e8fd7f7229c6b99910257524367" exitCode=0
Feb 02 17:31:18 crc kubenswrapper[4858]: I0202 17:31:18.479138 4858 generic.go:334] "Generic (PLEG): container finished" podID="c004c88f-a1ab-46fd-bb84-65dc1d7790c0" containerID="6f57b4fd25e9089fe610f3bba0b905d886920f36552692063ade2571971d5948" exitCode=143
Feb 02 17:31:18 crc kubenswrapper[4858]: I0202 17:31:18.479189 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c004c88f-a1ab-46fd-bb84-65dc1d7790c0","Type":"ContainerDied","Data":"48c8143009091c0dae03a566508692db0cc95e8fd7f7229c6b99910257524367"}
Feb 02 17:31:18 crc kubenswrapper[4858]: I0202
17:31:18.479241 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c004c88f-a1ab-46fd-bb84-65dc1d7790c0","Type":"ContainerDied","Data":"6f57b4fd25e9089fe610f3bba0b905d886920f36552692063ade2571971d5948"} Feb 02 17:31:19 crc kubenswrapper[4858]: I0202 17:31:19.835522 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9b7bf94f7-6x2q4"] Feb 02 17:31:19 crc kubenswrapper[4858]: I0202 17:31:19.881960 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-857c87669d-c45h7"] Feb 02 17:31:19 crc kubenswrapper[4858]: E0202 17:31:19.886219 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e14259-bf4e-47d1-952c-c17076756fd5" containerName="extract-utilities" Feb 02 17:31:19 crc kubenswrapper[4858]: I0202 17:31:19.886262 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e14259-bf4e-47d1-952c-c17076756fd5" containerName="extract-utilities" Feb 02 17:31:19 crc kubenswrapper[4858]: E0202 17:31:19.886291 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e14259-bf4e-47d1-952c-c17076756fd5" containerName="extract-content" Feb 02 17:31:19 crc kubenswrapper[4858]: I0202 17:31:19.886300 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e14259-bf4e-47d1-952c-c17076756fd5" containerName="extract-content" Feb 02 17:31:19 crc kubenswrapper[4858]: E0202 17:31:19.886317 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74e45947-b64b-41c2-8b25-04a632777ca1" containerName="init" Feb 02 17:31:19 crc kubenswrapper[4858]: I0202 17:31:19.886326 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e45947-b64b-41c2-8b25-04a632777ca1" containerName="init" Feb 02 17:31:19 crc kubenswrapper[4858]: E0202 17:31:19.886350 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e14259-bf4e-47d1-952c-c17076756fd5" containerName="registry-server" Feb 02 17:31:19 crc kubenswrapper[4858]: I0202 17:31:19.886359 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e14259-bf4e-47d1-952c-c17076756fd5" containerName="registry-server" Feb 02 17:31:19 crc kubenswrapper[4858]: I0202 17:31:19.886653 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e14259-bf4e-47d1-952c-c17076756fd5" containerName="registry-server" Feb 02 17:31:19 crc kubenswrapper[4858]: I0202 17:31:19.886672 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="74e45947-b64b-41c2-8b25-04a632777ca1" containerName="init" Feb 02 17:31:19 crc kubenswrapper[4858]: I0202 17:31:19.887871 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:19 crc kubenswrapper[4858]: I0202 17:31:19.891259 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 02 17:31:19 crc kubenswrapper[4858]: I0202 17:31:19.905311 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-857c87669d-c45h7"] Feb 02 17:31:19 crc kubenswrapper[4858]: I0202 17:31:19.942139 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d5a090-abc7-4832-b6c6-2e36edf7d82e-logs\") pod \"horizon-857c87669d-c45h7\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:19 crc kubenswrapper[4858]: I0202 17:31:19.942216 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6lwx\" (UniqueName: \"kubernetes.io/projected/24d5a090-abc7-4832-b6c6-2e36edf7d82e-kube-api-access-g6lwx\") pod \"horizon-857c87669d-c45h7\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:19 crc kubenswrapper[4858]: I0202 17:31:19.942250 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24d5a090-abc7-4832-b6c6-2e36edf7d82e-scripts\") pod \"horizon-857c87669d-c45h7\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:19 crc kubenswrapper[4858]: I0202 17:31:19.942286 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d5a090-abc7-4832-b6c6-2e36edf7d82e-horizon-tls-certs\") pod \"horizon-857c87669d-c45h7\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:19 crc kubenswrapper[4858]: I0202 17:31:19.942327 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/24d5a090-abc7-4832-b6c6-2e36edf7d82e-horizon-secret-key\") pod \"horizon-857c87669d-c45h7\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:19 crc kubenswrapper[4858]: I0202 17:31:19.942401 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d5a090-abc7-4832-b6c6-2e36edf7d82e-combined-ca-bundle\") pod \"horizon-857c87669d-c45h7\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:19 crc kubenswrapper[4858]: I0202 17:31:19.942437 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24d5a090-abc7-4832-b6c6-2e36edf7d82e-config-data\") pod \"horizon-857c87669d-c45h7\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:19 crc kubenswrapper[4858]: I0202 17:31:19.965431 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-85955cfd75-8bjzt"] Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.009078 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-68f4b57796-rhdnw"] Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.010433 
4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.024178 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68f4b57796-rhdnw"] Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.045429 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d5a090-abc7-4832-b6c6-2e36edf7d82e-combined-ca-bundle\") pod \"horizon-857c87669d-c45h7\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.045485 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24d5a090-abc7-4832-b6c6-2e36edf7d82e-config-data\") pod \"horizon-857c87669d-c45h7\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.045509 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d5a090-abc7-4832-b6c6-2e36edf7d82e-logs\") pod \"horizon-857c87669d-c45h7\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.045531 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6lwx\" (UniqueName: \"kubernetes.io/projected/24d5a090-abc7-4832-b6c6-2e36edf7d82e-kube-api-access-g6lwx\") pod \"horizon-857c87669d-c45h7\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.045560 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24d5a090-abc7-4832-b6c6-2e36edf7d82e-scripts\") pod \"horizon-857c87669d-c45h7\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.045585 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d5a090-abc7-4832-b6c6-2e36edf7d82e-horizon-tls-certs\") pod \"horizon-857c87669d-c45h7\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.045615 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/24d5a090-abc7-4832-b6c6-2e36edf7d82e-horizon-secret-key\") pod \"horizon-857c87669d-c45h7\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.046394 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24d5a090-abc7-4832-b6c6-2e36edf7d82e-scripts\") pod \"horizon-857c87669d-c45h7\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.047807 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d5a090-abc7-4832-b6c6-2e36edf7d82e-logs\") pod \"horizon-857c87669d-c45h7\" 
(UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.048720 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24d5a090-abc7-4832-b6c6-2e36edf7d82e-config-data\") pod \"horizon-857c87669d-c45h7\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.052472 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d5a090-abc7-4832-b6c6-2e36edf7d82e-combined-ca-bundle\") pod \"horizon-857c87669d-c45h7\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.060125 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/24d5a090-abc7-4832-b6c6-2e36edf7d82e-horizon-secret-key\") pod \"horizon-857c87669d-c45h7\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.062872 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d5a090-abc7-4832-b6c6-2e36edf7d82e-horizon-tls-certs\") pod \"horizon-857c87669d-c45h7\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.064079 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6lwx\" (UniqueName: \"kubernetes.io/projected/24d5a090-abc7-4832-b6c6-2e36edf7d82e-kube-api-access-g6lwx\") pod \"horizon-857c87669d-c45h7\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.147347 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a208969-437b-449b-ba53-89364175a52a-scripts\") pod \"horizon-68f4b57796-rhdnw\" (UID: \"4a208969-437b-449b-ba53-89364175a52a\") " pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.147401 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q72v\" (UniqueName: \"kubernetes.io/projected/4a208969-437b-449b-ba53-89364175a52a-kube-api-access-6q72v\") pod \"horizon-68f4b57796-rhdnw\" (UID: \"4a208969-437b-449b-ba53-89364175a52a\") " pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.147423 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a208969-437b-449b-ba53-89364175a52a-logs\") pod \"horizon-68f4b57796-rhdnw\" (UID: \"4a208969-437b-449b-ba53-89364175a52a\") " pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.147623 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4a208969-437b-449b-ba53-89364175a52a-horizon-secret-key\") pod \"horizon-68f4b57796-rhdnw\" (UID: \"4a208969-437b-449b-ba53-89364175a52a\") " 
pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.147645 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a208969-437b-449b-ba53-89364175a52a-combined-ca-bundle\") pod \"horizon-68f4b57796-rhdnw\" (UID: \"4a208969-437b-449b-ba53-89364175a52a\") " pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.147707 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a208969-437b-449b-ba53-89364175a52a-horizon-tls-certs\") pod \"horizon-68f4b57796-rhdnw\" (UID: \"4a208969-437b-449b-ba53-89364175a52a\") " pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.147726 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a208969-437b-449b-ba53-89364175a52a-config-data\") pod \"horizon-68f4b57796-rhdnw\" (UID: \"4a208969-437b-449b-ba53-89364175a52a\") " pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.220442 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.253177 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q72v\" (UniqueName: \"kubernetes.io/projected/4a208969-437b-449b-ba53-89364175a52a-kube-api-access-6q72v\") pod \"horizon-68f4b57796-rhdnw\" (UID: \"4a208969-437b-449b-ba53-89364175a52a\") " pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.253529 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a208969-437b-449b-ba53-89364175a52a-logs\") pod \"horizon-68f4b57796-rhdnw\" (UID: \"4a208969-437b-449b-ba53-89364175a52a\") " pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.253905 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a208969-437b-449b-ba53-89364175a52a-logs\") pod \"horizon-68f4b57796-rhdnw\" (UID: \"4a208969-437b-449b-ba53-89364175a52a\") " pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.254090 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4a208969-437b-449b-ba53-89364175a52a-horizon-secret-key\") pod \"horizon-68f4b57796-rhdnw\" (UID: \"4a208969-437b-449b-ba53-89364175a52a\") " pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.254115 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a208969-437b-449b-ba53-89364175a52a-combined-ca-bundle\") pod \"horizon-68f4b57796-rhdnw\" (UID: \"4a208969-437b-449b-ba53-89364175a52a\") " pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.254742 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4a208969-437b-449b-ba53-89364175a52a-horizon-tls-certs\") pod \"horizon-68f4b57796-rhdnw\" (UID: \"4a208969-437b-449b-ba53-89364175a52a\") " pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.255099 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a208969-437b-449b-ba53-89364175a52a-config-data\") pod \"horizon-68f4b57796-rhdnw\" (UID: \"4a208969-437b-449b-ba53-89364175a52a\") " pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.255168 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a208969-437b-449b-ba53-89364175a52a-scripts\") pod \"horizon-68f4b57796-rhdnw\" (UID: \"4a208969-437b-449b-ba53-89364175a52a\") " pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.255769 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a208969-437b-449b-ba53-89364175a52a-scripts\") pod \"horizon-68f4b57796-rhdnw\" (UID: \"4a208969-437b-449b-ba53-89364175a52a\") " pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.256878 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a208969-437b-449b-ba53-89364175a52a-config-data\") pod \"horizon-68f4b57796-rhdnw\" (UID: \"4a208969-437b-449b-ba53-89364175a52a\") " pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.264099 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a208969-437b-449b-ba53-89364175a52a-combined-ca-bundle\") pod \"horizon-68f4b57796-rhdnw\" (UID: \"4a208969-437b-449b-ba53-89364175a52a\") " pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.264441 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a208969-437b-449b-ba53-89364175a52a-horizon-tls-certs\") pod \"horizon-68f4b57796-rhdnw\" (UID: \"4a208969-437b-449b-ba53-89364175a52a\") " pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.264780 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4a208969-437b-449b-ba53-89364175a52a-horizon-secret-key\") pod \"horizon-68f4b57796-rhdnw\" (UID: \"4a208969-437b-449b-ba53-89364175a52a\") " pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.284552 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q72v\" (UniqueName: \"kubernetes.io/projected/4a208969-437b-449b-ba53-89364175a52a-kube-api-access-6q72v\") pod \"horizon-68f4b57796-rhdnw\" (UID: \"4a208969-437b-449b-ba53-89364175a52a\") " pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:20 crc kubenswrapper[4858]: I0202 17:31:20.348657 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:21 crc kubenswrapper[4858]: I0202 17:31:21.514490 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" Feb 02 17:31:21 crc kubenswrapper[4858]: I0202 17:31:21.595831 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-txp59"] Feb 02 17:31:21 crc kubenswrapper[4858]: I0202 17:31:21.596098 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-txp59" podUID="1d9a40d6-2f6c-4c93-8191-d5dde87c136b" containerName="dnsmasq-dns" containerID="cri-o://1f18566c3943ba67d0f75f6297177d245130658c254368a67ee5bd06140ee0ae" gracePeriod=10 Feb 02 17:31:22 crc kubenswrapper[4858]: I0202 17:31:22.520789 4858 generic.go:334] "Generic (PLEG): container finished" podID="1d9a40d6-2f6c-4c93-8191-d5dde87c136b" containerID="1f18566c3943ba67d0f75f6297177d245130658c254368a67ee5bd06140ee0ae" exitCode=0 Feb 02 17:31:22 crc kubenswrapper[4858]: I0202 17:31:22.520908 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-txp59" event={"ID":"1d9a40d6-2f6c-4c93-8191-d5dde87c136b","Type":"ContainerDied","Data":"1f18566c3943ba67d0f75f6297177d245130658c254368a67ee5bd06140ee0ae"} Feb 02 17:31:23 crc kubenswrapper[4858]: I0202 17:31:23.507369 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-txp59" podUID="1d9a40d6-2f6c-4c93-8191-d5dde87c136b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: connect: connection refused" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.185659 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.291887 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-public-tls-certs\") pod \"6fcea235-9da5-4a12-a312-83243be556bd\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.292030 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-config-data\") pod \"6fcea235-9da5-4a12-a312-83243be556bd\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.292090 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-scripts\") pod \"6fcea235-9da5-4a12-a312-83243be556bd\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.292116 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6fcea235-9da5-4a12-a312-83243be556bd-httpd-run\") pod \"6fcea235-9da5-4a12-a312-83243be556bd\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.292414 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-combined-ca-bundle\") pod \"6fcea235-9da5-4a12-a312-83243be556bd\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.292438 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fcea235-9da5-4a12-a312-83243be556bd-logs\") pod \"6fcea235-9da5-4a12-a312-83243be556bd\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.292461 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp4gm\" (UniqueName: \"kubernetes.io/projected/6fcea235-9da5-4a12-a312-83243be556bd-kube-api-access-sp4gm\") pod \"6fcea235-9da5-4a12-a312-83243be556bd\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.292489 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"6fcea235-9da5-4a12-a312-83243be556bd\" (UID: \"6fcea235-9da5-4a12-a312-83243be556bd\") " Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.292666 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fcea235-9da5-4a12-a312-83243be556bd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6fcea235-9da5-4a12-a312-83243be556bd" (UID: "6fcea235-9da5-4a12-a312-83243be556bd"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.292887 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6fcea235-9da5-4a12-a312-83243be556bd-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.292913 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fcea235-9da5-4a12-a312-83243be556bd-logs" (OuterVolumeSpecName: "logs") pod "6fcea235-9da5-4a12-a312-83243be556bd" (UID: "6fcea235-9da5-4a12-a312-83243be556bd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.297887 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "6fcea235-9da5-4a12-a312-83243be556bd" (UID: "6fcea235-9da5-4a12-a312-83243be556bd"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.299200 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fcea235-9da5-4a12-a312-83243be556bd-kube-api-access-sp4gm" (OuterVolumeSpecName: "kube-api-access-sp4gm") pod "6fcea235-9da5-4a12-a312-83243be556bd" (UID: "6fcea235-9da5-4a12-a312-83243be556bd"). InnerVolumeSpecName "kube-api-access-sp4gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.299892 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-scripts" (OuterVolumeSpecName: "scripts") pod "6fcea235-9da5-4a12-a312-83243be556bd" (UID: "6fcea235-9da5-4a12-a312-83243be556bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.333485 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fcea235-9da5-4a12-a312-83243be556bd" (UID: "6fcea235-9da5-4a12-a312-83243be556bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.342457 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-config-data" (OuterVolumeSpecName: "config-data") pod "6fcea235-9da5-4a12-a312-83243be556bd" (UID: "6fcea235-9da5-4a12-a312-83243be556bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.344208 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6fcea235-9da5-4a12-a312-83243be556bd" (UID: "6fcea235-9da5-4a12-a312-83243be556bd"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.395277 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.395319 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fcea235-9da5-4a12-a312-83243be556bd-logs\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.395578 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp4gm\" (UniqueName: \"kubernetes.io/projected/6fcea235-9da5-4a12-a312-83243be556bd-kube-api-access-sp4gm\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.395620 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.395632 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.395642 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.395656 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fcea235-9da5-4a12-a312-83243be556bd-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.415562 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.498031 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.558385 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6fcea235-9da5-4a12-a312-83243be556bd","Type":"ContainerDied","Data":"0a4e87065223244fed9be00309757298b6611b295add6fd7141cc4b51e5cbd2a"} Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.558431 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.558472 4858 scope.go:117] "RemoveContainer" containerID="b9d7667c1cf2892653de46b42b17ec3909cebd86dee445f51fd0551bc2697832" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.585856 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.606392 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.620160 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 17:31:26 crc kubenswrapper[4858]: E0202 17:31:26.620568 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fcea235-9da5-4a12-a312-83243be556bd" containerName="glance-httpd" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.620581 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fcea235-9da5-4a12-a312-83243be556bd" containerName="glance-httpd" Feb 02 17:31:26 crc kubenswrapper[4858]: E0202 17:31:26.620597 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fcea235-9da5-4a12-a312-83243be556bd" containerName="glance-log" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.620604 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fcea235-9da5-4a12-a312-83243be556bd" containerName="glance-log" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.620846 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fcea235-9da5-4a12-a312-83243be556bd" containerName="glance-httpd" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.620913 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fcea235-9da5-4a12-a312-83243be556bd" containerName="glance-log" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.622047 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.624390 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.624394 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.627214 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.803523 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.803571 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-scripts\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.803604 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-logs\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.803715 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-config-data\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.803753 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.803790 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.803814 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q95xk\" (UniqueName: \"kubernetes.io/projected/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-kube-api-access-q95xk\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.803843 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.905853 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.905911 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.905934 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q95xk\" (UniqueName: \"kubernetes.io/projected/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-kube-api-access-q95xk\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.905961 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.906008 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.906031 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-scripts\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.906059 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-logs\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.906116 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-config-data\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.906402 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.906827 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.906867 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-logs\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.911161 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-config-data\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.911392 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.912307 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.916275 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-scripts\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.923836 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q95xk\" (UniqueName: \"kubernetes.io/projected/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-kube-api-access-q95xk\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:26 crc kubenswrapper[4858]: I0202 17:31:26.943482 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " pod="openstack/glance-default-external-api-0" Feb 02 17:31:27 crc kubenswrapper[4858]: I0202 17:31:27.244905 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 17:31:27 crc kubenswrapper[4858]: I0202 17:31:27.809880 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:31:27 crc kubenswrapper[4858]: I0202 17:31:27.810286 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:31:28 crc kubenswrapper[4858]: I0202 17:31:28.411547 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fcea235-9da5-4a12-a312-83243be556bd" path="/var/lib/kubelet/pods/6fcea235-9da5-4a12-a312-83243be556bd/volumes" Feb 02 17:31:28 crc kubenswrapper[4858]: I0202 17:31:28.506297 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-txp59" podUID="1d9a40d6-2f6c-4c93-8191-d5dde87c136b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: connect: connection refused" Feb 02 17:31:31 crc kubenswrapper[4858]: E0202 17:31:31.477411 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 02 17:31:31 crc kubenswrapper[4858]: E0202 17:31:31.477927 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n55bh599h64ch66chcchcfhfh689h566h687h55fh68fh8ch545h58hd6h88hd6hcdh54bh668h55bhf5hcch5b8h67dhbch55chffhb7hfh54dq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ffjkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-85955cfd75-8bjzt_openstack(dc9f8ef4-b053-45da-98da-c2050420fcc6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 17:31:31 crc kubenswrapper[4858]: E0202 17:31:31.480761 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-85955cfd75-8bjzt" podUID="dc9f8ef4-b053-45da-98da-c2050420fcc6" Feb 02 17:31:31 crc kubenswrapper[4858]: E0202 17:31:31.492904 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 02 17:31:31 crc kubenswrapper[4858]: E0202 17:31:31.493136 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n697h675h7h679h59bh569h76h5b4h558h669h68bh57ch679h585h5dfh5f6hf8h59ch4h685h54bh694hd4h5dfh665h698h8ch5f5hc9h58dh59fh5f6q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k2z72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-799987499c-xpcx4_openstack(9f53e6ec-a84c-4b0f-bd1b-20acda3a9451): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 17:31:31 crc kubenswrapper[4858]: E0202 17:31:31.495368 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-799987499c-xpcx4" podUID="9f53e6ec-a84c-4b0f-bd1b-20acda3a9451" Feb 02 17:31:31 crc kubenswrapper[4858]: E0202 17:31:31.942298 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 02 17:31:31 crc kubenswrapper[4858]: E0202 17:31:31.943190 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh69hb5hc5h5fdh9bh8bh56dh674h5b4h659h5b7h58dh5f6h4h684hb8h59ch5b9h5fbh578h55dh666hcdh66fh597h5ch65ch687h5dch59chdbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8s88n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(20f4c2e0-6bac-4c5a-affd-48f2d8301111): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 17:31:31 crc kubenswrapper[4858]: E0202 17:31:31.958468 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 02 17:31:31 crc kubenswrapper[4858]: E0202 17:31:31.958597 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n99h56h54h5b8h695h55bhdfh84h68ch68dhc7h56bh68chb7h554hc6h7dh579h696h8h58dh676h578h544h86h688hbbh64fh648h684hdbh9bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwsx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-9b7bf94f7-6x2q4_openstack(b64a755d-846e-4d73-9e25-8dbe3bb5c30f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 17:31:31 crc kubenswrapper[4858]: E0202 17:31:31.960882 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-9b7bf94f7-6x2q4" podUID="b64a755d-846e-4d73-9e25-8dbe3bb5c30f" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.039614 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85955cfd75-8bjzt" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.044416 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.208912 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffjkq\" (UniqueName: \"kubernetes.io/projected/dc9f8ef4-b053-45da-98da-c2050420fcc6-kube-api-access-ffjkq\") pod \"dc9f8ef4-b053-45da-98da-c2050420fcc6\" (UID: \"dc9f8ef4-b053-45da-98da-c2050420fcc6\") " Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.209002 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc9f8ef4-b053-45da-98da-c2050420fcc6-config-data\") pod \"dc9f8ef4-b053-45da-98da-c2050420fcc6\" (UID: \"dc9f8ef4-b053-45da-98da-c2050420fcc6\") " Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.209052 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc9f8ef4-b053-45da-98da-c2050420fcc6-logs\") pod \"dc9f8ef4-b053-45da-98da-c2050420fcc6\" (UID: \"dc9f8ef4-b053-45da-98da-c2050420fcc6\") " Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.209081 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-credential-keys\") pod \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.209122 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc9f8ef4-b053-45da-98da-c2050420fcc6-scripts\") pod \"dc9f8ef4-b053-45da-98da-c2050420fcc6\" (UID: \"dc9f8ef4-b053-45da-98da-c2050420fcc6\") " Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.209206 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-fernet-keys\") pod \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.209234 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-scripts\") pod \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.209279 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-combined-ca-bundle\") pod \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.209307 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n24h8\" (UniqueName: \"kubernetes.io/projected/9a10b005-fba9-454e-b8a8-e7ffa96fc978-kube-api-access-n24h8\") pod \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.209336 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dc9f8ef4-b053-45da-98da-c2050420fcc6-horizon-secret-key\") pod \"dc9f8ef4-b053-45da-98da-c2050420fcc6\" (UID: 
\"dc9f8ef4-b053-45da-98da-c2050420fcc6\") " Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.209367 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-config-data\") pod \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\" (UID: \"9a10b005-fba9-454e-b8a8-e7ffa96fc978\") " Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.209512 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc9f8ef4-b053-45da-98da-c2050420fcc6-logs" (OuterVolumeSpecName: "logs") pod "dc9f8ef4-b053-45da-98da-c2050420fcc6" (UID: "dc9f8ef4-b053-45da-98da-c2050420fcc6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.209841 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc9f8ef4-b053-45da-98da-c2050420fcc6-logs\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.210332 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc9f8ef4-b053-45da-98da-c2050420fcc6-scripts" (OuterVolumeSpecName: "scripts") pod "dc9f8ef4-b053-45da-98da-c2050420fcc6" (UID: "dc9f8ef4-b053-45da-98da-c2050420fcc6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.210413 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc9f8ef4-b053-45da-98da-c2050420fcc6-config-data" (OuterVolumeSpecName: "config-data") pod "dc9f8ef4-b053-45da-98da-c2050420fcc6" (UID: "dc9f8ef4-b053-45da-98da-c2050420fcc6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.215574 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc9f8ef4-b053-45da-98da-c2050420fcc6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "dc9f8ef4-b053-45da-98da-c2050420fcc6" (UID: "dc9f8ef4-b053-45da-98da-c2050420fcc6"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.216559 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9a10b005-fba9-454e-b8a8-e7ffa96fc978" (UID: "9a10b005-fba9-454e-b8a8-e7ffa96fc978"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.217521 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9a10b005-fba9-454e-b8a8-e7ffa96fc978" (UID: "9a10b005-fba9-454e-b8a8-e7ffa96fc978"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.218794 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-scripts" (OuterVolumeSpecName: "scripts") pod "9a10b005-fba9-454e-b8a8-e7ffa96fc978" (UID: "9a10b005-fba9-454e-b8a8-e7ffa96fc978"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.220394 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a10b005-fba9-454e-b8a8-e7ffa96fc978-kube-api-access-n24h8" (OuterVolumeSpecName: "kube-api-access-n24h8") pod "9a10b005-fba9-454e-b8a8-e7ffa96fc978" (UID: "9a10b005-fba9-454e-b8a8-e7ffa96fc978"). InnerVolumeSpecName "kube-api-access-n24h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.221846 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc9f8ef4-b053-45da-98da-c2050420fcc6-kube-api-access-ffjkq" (OuterVolumeSpecName: "kube-api-access-ffjkq") pod "dc9f8ef4-b053-45da-98da-c2050420fcc6" (UID: "dc9f8ef4-b053-45da-98da-c2050420fcc6"). InnerVolumeSpecName "kube-api-access-ffjkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.251590 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a10b005-fba9-454e-b8a8-e7ffa96fc978" (UID: "9a10b005-fba9-454e-b8a8-e7ffa96fc978"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.273326 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-config-data" (OuterVolumeSpecName: "config-data") pod "9a10b005-fba9-454e-b8a8-e7ffa96fc978" (UID: "9a10b005-fba9-454e-b8a8-e7ffa96fc978"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.311456 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffjkq\" (UniqueName: \"kubernetes.io/projected/dc9f8ef4-b053-45da-98da-c2050420fcc6-kube-api-access-ffjkq\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.311693 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc9f8ef4-b053-45da-98da-c2050420fcc6-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.311704 4858 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.311714 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc9f8ef4-b053-45da-98da-c2050420fcc6-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.311722 4858 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.311731 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.311738 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.311747 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n24h8\" (UniqueName: \"kubernetes.io/projected/9a10b005-fba9-454e-b8a8-e7ffa96fc978-kube-api-access-n24h8\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.311757 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dc9f8ef4-b053-45da-98da-c2050420fcc6-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.311764 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a10b005-fba9-454e-b8a8-e7ffa96fc978-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.609042 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2cblc" event={"ID":"9a10b005-fba9-454e-b8a8-e7ffa96fc978","Type":"ContainerDied","Data":"5f171854a297a7e5f526a5203124f739c4e7832e73341cdd4452be0c487de97f"} Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.609067 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2cblc" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.609085 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f171854a297a7e5f526a5203124f739c4e7832e73341cdd4452be0c487de97f" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.614473 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-85955cfd75-8bjzt" Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.614791 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85955cfd75-8bjzt" event={"ID":"dc9f8ef4-b053-45da-98da-c2050420fcc6","Type":"ContainerDied","Data":"f92d2cd1fe5542697f5ff52c8f88373e4fdebae3a3d117198968643b8bc015af"} Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.700655 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-85955cfd75-8bjzt"] Feb 02 17:31:32 crc kubenswrapper[4858]: I0202 17:31:32.706878 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-85955cfd75-8bjzt"] Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.253014 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2cblc"] Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.260608 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2cblc"] Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.358722 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-5d29t"] Feb 02 17:31:33 crc kubenswrapper[4858]: E0202 17:31:33.359139 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a10b005-fba9-454e-b8a8-e7ffa96fc978" containerName="keystone-bootstrap" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.359164 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a10b005-fba9-454e-b8a8-e7ffa96fc978" containerName="keystone-bootstrap" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.359379 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a10b005-fba9-454e-b8a8-e7ffa96fc978" containerName="keystone-bootstrap" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.360066 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.363028 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.363042 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.363111 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.363146 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.363151 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7ztbs" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.379278 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5d29t"] Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.534294 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-config-data\") pod \"keystone-bootstrap-5d29t\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.534398 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-fernet-keys\") pod \"keystone-bootstrap-5d29t\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.534479 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-scripts\") pod \"keystone-bootstrap-5d29t\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.534507 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txq78\" (UniqueName: \"kubernetes.io/projected/368436df-491c-4059-92f9-16993b192d76-kube-api-access-txq78\") pod \"keystone-bootstrap-5d29t\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.534684 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-credential-keys\") pod \"keystone-bootstrap-5d29t\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.534710 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-combined-ca-bundle\") pod \"keystone-bootstrap-5d29t\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.637553 4858 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-scripts\") pod \"keystone-bootstrap-5d29t\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.638163 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txq78\" (UniqueName: \"kubernetes.io/projected/368436df-491c-4059-92f9-16993b192d76-kube-api-access-txq78\") pod \"keystone-bootstrap-5d29t\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.638236 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-credential-keys\") pod \"keystone-bootstrap-5d29t\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.638264 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-combined-ca-bundle\") pod \"keystone-bootstrap-5d29t\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.638322 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-config-data\") pod \"keystone-bootstrap-5d29t\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.638381 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-fernet-keys\") pod \"keystone-bootstrap-5d29t\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.646667 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-fernet-keys\") pod \"keystone-bootstrap-5d29t\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.648409 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-config-data\") pod \"keystone-bootstrap-5d29t\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.648511 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-scripts\") pod \"keystone-bootstrap-5d29t\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.649039 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-combined-ca-bundle\") pod \"keystone-bootstrap-5d29t\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") 
" pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.650065 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-credential-keys\") pod \"keystone-bootstrap-5d29t\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.656892 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txq78\" (UniqueName: \"kubernetes.io/projected/368436df-491c-4059-92f9-16993b192d76-kube-api-access-txq78\") pod \"keystone-bootstrap-5d29t\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:33 crc kubenswrapper[4858]: I0202 17:31:33.690803 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:34 crc kubenswrapper[4858]: I0202 17:31:34.410488 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a10b005-fba9-454e-b8a8-e7ffa96fc978" path="/var/lib/kubelet/pods/9a10b005-fba9-454e-b8a8-e7ffa96fc978/volumes" Feb 02 17:31:34 crc kubenswrapper[4858]: I0202 17:31:34.411265 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc9f8ef4-b053-45da-98da-c2050420fcc6" path="/var/lib/kubelet/pods/dc9f8ef4-b053-45da-98da-c2050420fcc6/volumes" Feb 02 17:31:34 crc kubenswrapper[4858]: I0202 17:31:34.631730 4858 generic.go:334] "Generic (PLEG): container finished" podID="d5f00171-8005-4f58-a90a-5f0be6c6a48f" containerID="0dd7abadbe4ebe5d95892a5fcbf84962471de4be3deb29b052c9ba7afbec7b2c" exitCode=0 Feb 02 17:31:34 crc kubenswrapper[4858]: I0202 17:31:34.631789 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sgmhl" event={"ID":"d5f00171-8005-4f58-a90a-5f0be6c6a48f","Type":"ContainerDied","Data":"0dd7abadbe4ebe5d95892a5fcbf84962471de4be3deb29b052c9ba7afbec7b2c"} Feb 02 17:31:38 crc kubenswrapper[4858]: I0202 17:31:38.506762 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-txp59" podUID="1d9a40d6-2f6c-4c93-8191-d5dde87c136b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: i/o timeout" Feb 02 17:31:38 crc kubenswrapper[4858]: I0202 17:31:38.507551 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-txp59" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.297244 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-799987499c-xpcx4" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.310521 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.310790 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-txp59" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.321699 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sgmhl" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.323363 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-9b7bf94f7-6x2q4" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.460422 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-ovsdbserver-nb\") pod \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.460476 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-scripts\") pod \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\" (UID: \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.460508 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-dns-swift-storage-0\") pod \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.460533 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-logs\") pod \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\" (UID: \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.460553 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-combined-ca-bundle\") pod \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.460607 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-scripts\") pod \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\" (UID: \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.460639 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zlfg\" (UniqueName: \"kubernetes.io/projected/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-kube-api-access-8zlfg\") pod \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.460673 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-logs\") pod \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\" (UID: \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.460702 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.460741 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d5f00171-8005-4f58-a90a-5f0be6c6a48f-config\") pod \"d5f00171-8005-4f58-a90a-5f0be6c6a48f\" (UID: \"d5f00171-8005-4f58-a90a-5f0be6c6a48f\") " Feb 02 17:31:39 crc kubenswrapper[4858]: 
I0202 17:31:39.460761 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-scripts\") pod \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.460783 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-config\") pod \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.460862 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-horizon-secret-key\") pod \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\" (UID: \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.460916 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-internal-tls-certs\") pod \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.460947 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f00171-8005-4f58-a90a-5f0be6c6a48f-combined-ca-bundle\") pod \"d5f00171-8005-4f58-a90a-5f0be6c6a48f\" (UID: \"d5f00171-8005-4f58-a90a-5f0be6c6a48f\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.460988 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds67x\" (UniqueName: \"kubernetes.io/projected/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-kube-api-access-ds67x\") pod \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.461021 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-httpd-run\") pod \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.461015 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-scripts" (OuterVolumeSpecName: "scripts") pod "b64a755d-846e-4d73-9e25-8dbe3bb5c30f" (UID: "b64a755d-846e-4d73-9e25-8dbe3bb5c30f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.461051 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-logs\") pod \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.461080 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-dns-svc\") pod \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.461080 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-logs" (OuterVolumeSpecName: "logs") pod "b64a755d-846e-4d73-9e25-8dbe3bb5c30f" (UID: "b64a755d-846e-4d73-9e25-8dbe3bb5c30f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.461104 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-ovsdbserver-sb\") pod \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\" (UID: \"1d9a40d6-2f6c-4c93-8191-d5dde87c136b\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.461176 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-config-data\") pod \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\" (UID: \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.461215 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-config-data\") pod \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\" (UID: \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.461241 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwsx5\" (UniqueName: \"kubernetes.io/projected/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-kube-api-access-bwsx5\") pod \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\" (UID: \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.461272 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-horizon-secret-key\") pod \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\" (UID: \"b64a755d-846e-4d73-9e25-8dbe3bb5c30f\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.461308 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2z72\" (UniqueName: \"kubernetes.io/projected/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-kube-api-access-k2z72\") pod \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\" (UID: \"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.461331 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-config-data\") pod 
\"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\" (UID: \"c004c88f-a1ab-46fd-bb84-65dc1d7790c0\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.461364 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmzjf\" (UniqueName: \"kubernetes.io/projected/d5f00171-8005-4f58-a90a-5f0be6c6a48f-kube-api-access-hmzjf\") pod \"d5f00171-8005-4f58-a90a-5f0be6c6a48f\" (UID: \"d5f00171-8005-4f58-a90a-5f0be6c6a48f\") " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.461678 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-logs" (OuterVolumeSpecName: "logs") pod "9f53e6ec-a84c-4b0f-bd1b-20acda3a9451" (UID: "9f53e6ec-a84c-4b0f-bd1b-20acda3a9451"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.461879 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-scripts" (OuterVolumeSpecName: "scripts") pod "9f53e6ec-a84c-4b0f-bd1b-20acda3a9451" (UID: "9f53e6ec-a84c-4b0f-bd1b-20acda3a9451"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.462232 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-config-data" (OuterVolumeSpecName: "config-data") pod "b64a755d-846e-4d73-9e25-8dbe3bb5c30f" (UID: "b64a755d-846e-4d73-9e25-8dbe3bb5c30f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.462847 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c004c88f-a1ab-46fd-bb84-65dc1d7790c0" (UID: "c004c88f-a1ab-46fd-bb84-65dc1d7790c0"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.462890 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.462912 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.462925 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-logs\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.462936 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.462946 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-logs\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.464956 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-kube-api-access-8zlfg" (OuterVolumeSpecName: "kube-api-access-8zlfg") pod "1d9a40d6-2f6c-4c93-8191-d5dde87c136b" (UID: "1d9a40d6-2f6c-4c93-8191-d5dde87c136b"). InnerVolumeSpecName "kube-api-access-8zlfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.467058 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "c004c88f-a1ab-46fd-bb84-65dc1d7790c0" (UID: "c004c88f-a1ab-46fd-bb84-65dc1d7790c0"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.470715 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-logs" (OuterVolumeSpecName: "logs") pod "c004c88f-a1ab-46fd-bb84-65dc1d7790c0" (UID: "c004c88f-a1ab-46fd-bb84-65dc1d7790c0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.472020 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-config-data" (OuterVolumeSpecName: "config-data") pod "9f53e6ec-a84c-4b0f-bd1b-20acda3a9451" (UID: "9f53e6ec-a84c-4b0f-bd1b-20acda3a9451"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.473124 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b64a755d-846e-4d73-9e25-8dbe3bb5c30f" (UID: "b64a755d-846e-4d73-9e25-8dbe3bb5c30f"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.475017 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-kube-api-access-k2z72" (OuterVolumeSpecName: "kube-api-access-k2z72") pod "9f53e6ec-a84c-4b0f-bd1b-20acda3a9451" (UID: "9f53e6ec-a84c-4b0f-bd1b-20acda3a9451"). InnerVolumeSpecName "kube-api-access-k2z72". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.476822 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-kube-api-access-ds67x" (OuterVolumeSpecName: "kube-api-access-ds67x") pod "c004c88f-a1ab-46fd-bb84-65dc1d7790c0" (UID: "c004c88f-a1ab-46fd-bb84-65dc1d7790c0"). InnerVolumeSpecName "kube-api-access-ds67x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.477793 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5f00171-8005-4f58-a90a-5f0be6c6a48f-kube-api-access-hmzjf" (OuterVolumeSpecName: "kube-api-access-hmzjf") pod "d5f00171-8005-4f58-a90a-5f0be6c6a48f" (UID: "d5f00171-8005-4f58-a90a-5f0be6c6a48f"). InnerVolumeSpecName "kube-api-access-hmzjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.488002 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-kube-api-access-bwsx5" (OuterVolumeSpecName: "kube-api-access-bwsx5") pod "b64a755d-846e-4d73-9e25-8dbe3bb5c30f" (UID: "b64a755d-846e-4d73-9e25-8dbe3bb5c30f"). InnerVolumeSpecName "kube-api-access-bwsx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.494815 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9f53e6ec-a84c-4b0f-bd1b-20acda3a9451" (UID: "9f53e6ec-a84c-4b0f-bd1b-20acda3a9451"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.494731 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-scripts" (OuterVolumeSpecName: "scripts") pod "c004c88f-a1ab-46fd-bb84-65dc1d7790c0" (UID: "c004c88f-a1ab-46fd-bb84-65dc1d7790c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.510561 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f00171-8005-4f58-a90a-5f0be6c6a48f-config" (OuterVolumeSpecName: "config") pod "d5f00171-8005-4f58-a90a-5f0be6c6a48f" (UID: "d5f00171-8005-4f58-a90a-5f0be6c6a48f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.517437 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c004c88f-a1ab-46fd-bb84-65dc1d7790c0" (UID: "c004c88f-a1ab-46fd-bb84-65dc1d7790c0"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.524763 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f00171-8005-4f58-a90a-5f0be6c6a48f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5f00171-8005-4f58-a90a-5f0be6c6a48f" (UID: "d5f00171-8005-4f58-a90a-5f0be6c6a48f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.536818 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1d9a40d6-2f6c-4c93-8191-d5dde87c136b" (UID: "1d9a40d6-2f6c-4c93-8191-d5dde87c136b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.546876 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1d9a40d6-2f6c-4c93-8191-d5dde87c136b" (UID: "1d9a40d6-2f6c-4c93-8191-d5dde87c136b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.551806 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-config" (OuterVolumeSpecName: "config") pod "1d9a40d6-2f6c-4c93-8191-d5dde87c136b" (UID: "1d9a40d6-2f6c-4c93-8191-d5dde87c136b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.564388 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f00171-8005-4f58-a90a-5f0be6c6a48f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.564424 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds67x\" (UniqueName: \"kubernetes.io/projected/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-kube-api-access-ds67x\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.564439 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.564450 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-logs\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.564464 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.564457 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-config-data" (OuterVolumeSpecName: "config-data") pod "c004c88f-a1ab-46fd-bb84-65dc1d7790c0" (UID: "c004c88f-a1ab-46fd-bb84-65dc1d7790c0"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.564474 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwsx5\" (UniqueName: \"kubernetes.io/projected/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-kube-api-access-bwsx5\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.564488 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b64a755d-846e-4d73-9e25-8dbe3bb5c30f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.564499 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2z72\" (UniqueName: \"kubernetes.io/projected/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-kube-api-access-k2z72\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.564511 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmzjf\" (UniqueName: \"kubernetes.io/projected/d5f00171-8005-4f58-a90a-5f0be6c6a48f-kube-api-access-hmzjf\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.564597 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.565444 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.565468 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.565482 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zlfg\" (UniqueName: \"kubernetes.io/projected/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-kube-api-access-8zlfg\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.565506 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.565515 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d5f00171-8005-4f58-a90a-5f0be6c6a48f-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.565523 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.565533 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.565543 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451-horizon-secret-key\") 
on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.568596 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c004c88f-a1ab-46fd-bb84-65dc1d7790c0" (UID: "c004c88f-a1ab-46fd-bb84-65dc1d7790c0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.570908 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1d9a40d6-2f6c-4c93-8191-d5dde87c136b" (UID: "1d9a40d6-2f6c-4c93-8191-d5dde87c136b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.575437 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d9a40d6-2f6c-4c93-8191-d5dde87c136b" (UID: "1d9a40d6-2f6c-4c93-8191-d5dde87c136b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.585544 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.666899 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.666956 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.666966 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d9a40d6-2f6c-4c93-8191-d5dde87c136b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.667000 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c004c88f-a1ab-46fd-bb84-65dc1d7790c0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.667010 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.678258 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-799987499c-xpcx4" event={"ID":"9f53e6ec-a84c-4b0f-bd1b-20acda3a9451","Type":"ContainerDied","Data":"8e328a67870e07c310ef6053329233411fed8c4be7f5eef890a95a3b8bf02601"} Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.678697 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-799987499c-xpcx4" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.692243 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-txp59" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.692242 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-txp59" event={"ID":"1d9a40d6-2f6c-4c93-8191-d5dde87c136b","Type":"ContainerDied","Data":"b936bfa9cd3ff3ae60a033d8e9f16337eae9bba04c2f32528a7a4cdb41b0cf90"} Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.694062 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c004c88f-a1ab-46fd-bb84-65dc1d7790c0","Type":"ContainerDied","Data":"db55197a4a6b1b6a09b17f52caeae6e30a9829564909a22e945110d73d590963"} Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.694154 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.695700 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9b7bf94f7-6x2q4" event={"ID":"b64a755d-846e-4d73-9e25-8dbe3bb5c30f","Type":"ContainerDied","Data":"8a49043e05c16d6c103a3f3128f6ff118e146c9ba0291651376ba62680a861b0"} Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.695841 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9b7bf94f7-6x2q4" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.698999 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sgmhl" event={"ID":"d5f00171-8005-4f58-a90a-5f0be6c6a48f","Type":"ContainerDied","Data":"4672551d0726eb625825b03a3cfd37825f953841025762e2ec827cf1bb928e7f"} Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.699049 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4672551d0726eb625825b03a3cfd37825f953841025762e2ec827cf1bb928e7f" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.699074 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-sgmhl" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.759523 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-799987499c-xpcx4"] Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.777775 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-799987499c-xpcx4"] Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.790702 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-txp59"] Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.807177 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-txp59"] Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.816176 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.823379 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.862553 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9b7bf94f7-6x2q4"] Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.862608 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-9b7bf94f7-6x2q4"] Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.862628 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 17:31:39 crc kubenswrapper[4858]: E0202 17:31:39.863096 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d9a40d6-2f6c-4c93-8191-d5dde87c136b" containerName="init" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.863111 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d9a40d6-2f6c-4c93-8191-d5dde87c136b" containerName="init" Feb 02 17:31:39 crc kubenswrapper[4858]: E0202 17:31:39.863130 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c004c88f-a1ab-46fd-bb84-65dc1d7790c0" containerName="glance-httpd" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.863137 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c004c88f-a1ab-46fd-bb84-65dc1d7790c0" containerName="glance-httpd" Feb 02 17:31:39 crc kubenswrapper[4858]: E0202 17:31:39.863147 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d9a40d6-2f6c-4c93-8191-d5dde87c136b" containerName="dnsmasq-dns" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.863152 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d9a40d6-2f6c-4c93-8191-d5dde87c136b" containerName="dnsmasq-dns" Feb 02 17:31:39 crc kubenswrapper[4858]: E0202 17:31:39.863168 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c004c88f-a1ab-46fd-bb84-65dc1d7790c0" containerName="glance-log" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.863173 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c004c88f-a1ab-46fd-bb84-65dc1d7790c0" containerName="glance-log" Feb 02 17:31:39 crc kubenswrapper[4858]: E0202 17:31:39.863182 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f00171-8005-4f58-a90a-5f0be6c6a48f" containerName="neutron-db-sync" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.863188 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f00171-8005-4f58-a90a-5f0be6c6a48f" containerName="neutron-db-sync" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.863346 4858 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1d9a40d6-2f6c-4c93-8191-d5dde87c136b" containerName="dnsmasq-dns" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.863363 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f00171-8005-4f58-a90a-5f0be6c6a48f" containerName="neutron-db-sync" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.863373 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c004c88f-a1ab-46fd-bb84-65dc1d7790c0" containerName="glance-log" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.863380 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c004c88f-a1ab-46fd-bb84-65dc1d7790c0" containerName="glance-httpd" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.864323 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.867509 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.868193 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.872265 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.975773 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-config-data\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.976430 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.976555 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv66f\" (UniqueName: \"kubernetes.io/projected/11d650f7-3342-41ec-b78a-0f9cbbac4368-kube-api-access-rv66f\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.976614 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11d650f7-3342-41ec-b78a-0f9cbbac4368-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.976664 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11d650f7-3342-41ec-b78a-0f9cbbac4368-logs\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.976729 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.976890 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:39 crc kubenswrapper[4858]: I0202 17:31:39.977032 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-scripts\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.078690 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11d650f7-3342-41ec-b78a-0f9cbbac4368-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.078742 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11d650f7-3342-41ec-b78a-0f9cbbac4368-logs\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.078760 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.078857 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.078888 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-scripts\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.078925 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-config-data\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.078949 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.079055 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv66f\" (UniqueName: \"kubernetes.io/projected/11d650f7-3342-41ec-b78a-0f9cbbac4368-kube-api-access-rv66f\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.079350 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11d650f7-3342-41ec-b78a-0f9cbbac4368-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.079383 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.079861 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11d650f7-3342-41ec-b78a-0f9cbbac4368-logs\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.083046 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.083411 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-scripts\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.083868 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.085196 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-config-data\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.095832 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv66f\" (UniqueName: \"kubernetes.io/projected/11d650f7-3342-41ec-b78a-0f9cbbac4368-kube-api-access-rv66f\") pod 
\"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.102991 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.249054 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.415304 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d9a40d6-2f6c-4c93-8191-d5dde87c136b" path="/var/lib/kubelet/pods/1d9a40d6-2f6c-4c93-8191-d5dde87c136b/volumes" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.416604 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f53e6ec-a84c-4b0f-bd1b-20acda3a9451" path="/var/lib/kubelet/pods/9f53e6ec-a84c-4b0f-bd1b-20acda3a9451/volumes" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.417174 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b64a755d-846e-4d73-9e25-8dbe3bb5c30f" path="/var/lib/kubelet/pods/b64a755d-846e-4d73-9e25-8dbe3bb5c30f/volumes" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.417694 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c004c88f-a1ab-46fd-bb84-65dc1d7790c0" path="/var/lib/kubelet/pods/c004c88f-a1ab-46fd-bb84-65dc1d7790c0/volumes" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.579051 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-nz69z"] Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.580795 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.614519 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-nz69z"] Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.690330 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-dns-svc\") pod \"dnsmasq-dns-55f844cf75-nz69z\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.690381 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-nz69z\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.690426 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-config\") pod \"dnsmasq-dns-55f844cf75-nz69z\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.690500 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-nz69z\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.690528 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-nz69z\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.690564 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j226h\" (UniqueName: \"kubernetes.io/projected/98af866c-3b91-4a5a-9c15-681572dbd5de-kube-api-access-j226h\") pod \"dnsmasq-dns-55f844cf75-nz69z\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.693705 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-78bb7f4c66-lspk6"] Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.695068 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-78bb7f4c66-lspk6" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.708497 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.708834 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.708928 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.709092 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5s7k7" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.723787 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78bb7f4c66-lspk6"] Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.792038 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-ovndb-tls-certs\") pod \"neutron-78bb7f4c66-lspk6\" (UID: \"1f84b369-07ee-4a29-8f3b-be71b0e37772\") " pod="openstack/neutron-78bb7f4c66-lspk6" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.792230 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-nz69z\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.792284 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-nz69z\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.792344 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j226h\" (UniqueName: \"kubernetes.io/projected/98af866c-3b91-4a5a-9c15-681572dbd5de-kube-api-access-j226h\") pod \"dnsmasq-dns-55f844cf75-nz69z\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.792452 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9kqk\" (UniqueName: \"kubernetes.io/projected/1f84b369-07ee-4a29-8f3b-be71b0e37772-kube-api-access-t9kqk\") pod \"neutron-78bb7f4c66-lspk6\" (UID: \"1f84b369-07ee-4a29-8f3b-be71b0e37772\") " pod="openstack/neutron-78bb7f4c66-lspk6" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.792476 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-config\") pod \"neutron-78bb7f4c66-lspk6\" (UID: \"1f84b369-07ee-4a29-8f3b-be71b0e37772\") " pod="openstack/neutron-78bb7f4c66-lspk6" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.792567 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-httpd-config\") pod 
\"neutron-78bb7f4c66-lspk6\" (UID: \"1f84b369-07ee-4a29-8f3b-be71b0e37772\") " pod="openstack/neutron-78bb7f4c66-lspk6" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.792589 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-dns-svc\") pod \"dnsmasq-dns-55f844cf75-nz69z\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.792607 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-nz69z\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.792637 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-combined-ca-bundle\") pod \"neutron-78bb7f4c66-lspk6\" (UID: \"1f84b369-07ee-4a29-8f3b-be71b0e37772\") " pod="openstack/neutron-78bb7f4c66-lspk6" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.792684 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-config\") pod \"dnsmasq-dns-55f844cf75-nz69z\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.793550 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-config\") pod \"dnsmasq-dns-55f844cf75-nz69z\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.794123 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-nz69z\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.794230 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-dns-svc\") pod \"dnsmasq-dns-55f844cf75-nz69z\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.794948 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-nz69z\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.797068 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-nz69z\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 
17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.844054 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j226h\" (UniqueName: \"kubernetes.io/projected/98af866c-3b91-4a5a-9c15-681572dbd5de-kube-api-access-j226h\") pod \"dnsmasq-dns-55f844cf75-nz69z\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.893665 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-ovndb-tls-certs\") pod \"neutron-78bb7f4c66-lspk6\" (UID: \"1f84b369-07ee-4a29-8f3b-be71b0e37772\") " pod="openstack/neutron-78bb7f4c66-lspk6" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.893784 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9kqk\" (UniqueName: \"kubernetes.io/projected/1f84b369-07ee-4a29-8f3b-be71b0e37772-kube-api-access-t9kqk\") pod \"neutron-78bb7f4c66-lspk6\" (UID: \"1f84b369-07ee-4a29-8f3b-be71b0e37772\") " pod="openstack/neutron-78bb7f4c66-lspk6" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.893804 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-config\") pod \"neutron-78bb7f4c66-lspk6\" (UID: \"1f84b369-07ee-4a29-8f3b-be71b0e37772\") " pod="openstack/neutron-78bb7f4c66-lspk6" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.893843 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-httpd-config\") pod \"neutron-78bb7f4c66-lspk6\" (UID: \"1f84b369-07ee-4a29-8f3b-be71b0e37772\") " pod="openstack/neutron-78bb7f4c66-lspk6" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.893863 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-combined-ca-bundle\") pod \"neutron-78bb7f4c66-lspk6\" (UID: \"1f84b369-07ee-4a29-8f3b-be71b0e37772\") " pod="openstack/neutron-78bb7f4c66-lspk6" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.897827 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-httpd-config\") pod \"neutron-78bb7f4c66-lspk6\" (UID: \"1f84b369-07ee-4a29-8f3b-be71b0e37772\") " pod="openstack/neutron-78bb7f4c66-lspk6" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.897876 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-config\") pod \"neutron-78bb7f4c66-lspk6\" (UID: \"1f84b369-07ee-4a29-8f3b-be71b0e37772\") " pod="openstack/neutron-78bb7f4c66-lspk6" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.898510 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-combined-ca-bundle\") pod \"neutron-78bb7f4c66-lspk6\" (UID: \"1f84b369-07ee-4a29-8f3b-be71b0e37772\") " pod="openstack/neutron-78bb7f4c66-lspk6" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.914480 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-ovndb-tls-certs\") pod \"neutron-78bb7f4c66-lspk6\" (UID: \"1f84b369-07ee-4a29-8f3b-be71b0e37772\") " pod="openstack/neutron-78bb7f4c66-lspk6" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.925778 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9kqk\" (UniqueName: \"kubernetes.io/projected/1f84b369-07ee-4a29-8f3b-be71b0e37772-kube-api-access-t9kqk\") pod \"neutron-78bb7f4c66-lspk6\" (UID: \"1f84b369-07ee-4a29-8f3b-be71b0e37772\") " pod="openstack/neutron-78bb7f4c66-lspk6" Feb 02 17:31:40 crc kubenswrapper[4858]: I0202 17:31:40.977146 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:41 crc kubenswrapper[4858]: I0202 17:31:41.033227 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-78bb7f4c66-lspk6" Feb 02 17:31:41 crc kubenswrapper[4858]: I0202 17:31:41.159124 4858 scope.go:117] "RemoveContainer" containerID="04212a4550e174d6fb9cad97613eef9e31790cb7d4eeb63fabf19f4ac0711f2d" Feb 02 17:31:41 crc kubenswrapper[4858]: E0202 17:31:41.178664 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 02 17:31:41 crc kubenswrapper[4858]: E0202 17:31:41.178849 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fjfgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:ni
l,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-6xt8q_openstack(d5a9fadc-338f-44bb-8ebd-bc4fe01972bf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 17:31:41 crc kubenswrapper[4858]: E0202 17:31:41.180221 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-6xt8q" podUID="d5a9fadc-338f-44bb-8ebd-bc4fe01972bf" Feb 02 17:31:41 crc kubenswrapper[4858]: I0202 17:31:41.643901 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68f4b57796-rhdnw"] Feb 02 17:31:41 crc kubenswrapper[4858]: W0202 17:31:41.698526 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a208969_437b_449b_ba53_89364175a52a.slice/crio-e5f963bf92aa6161dd9678ba80a32f301106fab231962325e577c627803dfb9b WatchSource:0}: Error finding container e5f963bf92aa6161dd9678ba80a32f301106fab231962325e577c627803dfb9b: Status 404 returned error can't find the container with id e5f963bf92aa6161dd9678ba80a32f301106fab231962325e577c627803dfb9b Feb 02 17:31:41 crc kubenswrapper[4858]: I0202 17:31:41.724161 4858 scope.go:117] "RemoveContainer" containerID="1f18566c3943ba67d0f75f6297177d245130658c254368a67ee5bd06140ee0ae" Feb 02 17:31:41 crc kubenswrapper[4858]: I0202 17:31:41.807386 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68f4b57796-rhdnw" event={"ID":"4a208969-437b-449b-ba53-89364175a52a","Type":"ContainerStarted","Data":"e5f963bf92aa6161dd9678ba80a32f301106fab231962325e577c627803dfb9b"} Feb 02 17:31:41 crc kubenswrapper[4858]: I0202 17:31:41.843805 4858 scope.go:117] "RemoveContainer" containerID="b0ffbaff1e4b83a1016db892a70e29fd8a0f137a85dd400d47e8da89eb3e7abb" Feb 02 17:31:41 crc kubenswrapper[4858]: E0202 17:31:41.843854 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-6xt8q" podUID="d5a9fadc-338f-44bb-8ebd-bc4fe01972bf" Feb 02 17:31:41 crc kubenswrapper[4858]: I0202 17:31:41.898540 4858 scope.go:117] "RemoveContainer" containerID="48c8143009091c0dae03a566508692db0cc95e8fd7f7229c6b99910257524367" Feb 02 17:31:41 crc kubenswrapper[4858]: I0202 17:31:41.987898 4858 scope.go:117] "RemoveContainer" containerID="6f57b4fd25e9089fe610f3bba0b905d886920f36552692063ade2571971d5948" Feb 02 17:31:42 crc kubenswrapper[4858]: I0202 17:31:42.127788 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-857c87669d-c45h7"] Feb 02 17:31:42 crc kubenswrapper[4858]: W0202 17:31:42.134910 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24d5a090_abc7_4832_b6c6_2e36edf7d82e.slice/crio-1e1d5f39bde70462a2beb4ccf10aa9808ecd8ca0b7ba97f47016923341aa81a1 WatchSource:0}: Error finding container 
1e1d5f39bde70462a2beb4ccf10aa9808ecd8ca0b7ba97f47016923341aa81a1: Status 404 returned error can't find the container with id 1e1d5f39bde70462a2beb4ccf10aa9808ecd8ca0b7ba97f47016923341aa81a1 Feb 02 17:31:42 crc kubenswrapper[4858]: I0202 17:31:42.417384 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5d29t"] Feb 02 17:31:42 crc kubenswrapper[4858]: I0202 17:31:42.448855 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 17:31:42 crc kubenswrapper[4858]: I0202 17:31:42.481959 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-nz69z"] Feb 02 17:31:42 crc kubenswrapper[4858]: I0202 17:31:42.558896 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-78bb7f4c66-lspk6"] Feb 02 17:31:42 crc kubenswrapper[4858]: I0202 17:31:42.695045 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 17:31:42 crc kubenswrapper[4858]: I0202 17:31:42.832504 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758","Type":"ContainerStarted","Data":"d6a6c2dc6eef44b7f69730d02cb56c1ed0247c57ff77e6ebcfd7fb3f79479b1b"} Feb 02 17:31:42 crc kubenswrapper[4858]: I0202 17:31:42.835996 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-857c87669d-c45h7" event={"ID":"24d5a090-abc7-4832-b6c6-2e36edf7d82e","Type":"ContainerStarted","Data":"1e1d5f39bde70462a2beb4ccf10aa9808ecd8ca0b7ba97f47016923341aa81a1"} Feb 02 17:31:42 crc kubenswrapper[4858]: I0202 17:31:42.849348 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5d29t" event={"ID":"368436df-491c-4059-92f9-16993b192d76","Type":"ContainerStarted","Data":"6b334ecf4231d51ee8d95f3958353865b97e1a74fff41018a34850fd7f7761f7"} Feb 02 17:31:42 crc kubenswrapper[4858]: I0202 17:31:42.849396 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5d29t" event={"ID":"368436df-491c-4059-92f9-16993b192d76","Type":"ContainerStarted","Data":"07a6ddc98a5e0d8b44c34886d201093adcdc3f39dc7959a90b8b444b07f05c9f"} Feb 02 17:31:42 crc kubenswrapper[4858]: I0202 17:31:42.856629 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f4c2e0-6bac-4c5a-affd-48f2d8301111","Type":"ContainerStarted","Data":"9b71df5e3093339ed27494207b4e8407bc23ffb89e50cc0d982bb84e06fecfe1"} Feb 02 17:31:42 crc kubenswrapper[4858]: I0202 17:31:42.878370 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-nz69z" event={"ID":"98af866c-3b91-4a5a-9c15-681572dbd5de","Type":"ContainerStarted","Data":"6c349d35d215322cefc39d48346b96e3b80697ba6fb94044062b1bdf932ae4a1"} Feb 02 17:31:42 crc kubenswrapper[4858]: I0202 17:31:42.880549 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mdq86" event={"ID":"56da7ca5-acf2-4372-9e48-20b829275727","Type":"ContainerStarted","Data":"2de5a1bd3a836ccf25ef75da4703f5354db68ef3c1d464e0a233a767be0e9bcf"} Feb 02 17:31:42 crc kubenswrapper[4858]: I0202 17:31:42.886872 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-5d29t" podStartSLOduration=9.886849886 podStartE2EDuration="9.886849886s" podCreationTimestamp="2026-02-02 17:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:31:42.876856591 +0000 UTC m=+1004.029271856" watchObservedRunningTime="2026-02-02 17:31:42.886849886 +0000 UTC m=+1004.039265161" Feb 02 17:31:42 crc kubenswrapper[4858]: I0202 17:31:42.894638 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gfbpg" event={"ID":"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d","Type":"ContainerStarted","Data":"6ba4e0eedf199bd7f5ace80d1079a59541293083f98002024bc39aed9ad830c8"} Feb 02 17:31:42 crc kubenswrapper[4858]: I0202 17:31:42.906192 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78bb7f4c66-lspk6" event={"ID":"1f84b369-07ee-4a29-8f3b-be71b0e37772","Type":"ContainerStarted","Data":"e31459d2368629f324d83381ad29e22c2805699ba187745818edb10864a32912"} Feb 02 17:31:42 crc kubenswrapper[4858]: I0202 17:31:42.908044 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"11d650f7-3342-41ec-b78a-0f9cbbac4368","Type":"ContainerStarted","Data":"00d14742213a18e37fd7b011e3c76d7b420102ae5c46ea9e6d24581be14647f0"} Feb 02 17:31:42 crc kubenswrapper[4858]: I0202 17:31:42.920455 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-mdq86" podStartSLOduration=5.889171639 podStartE2EDuration="32.920433046s" podCreationTimestamp="2026-02-02 17:31:10 +0000 UTC" firstStartedPulling="2026-02-02 17:31:14.09290905 +0000 UTC m=+975.245324315" lastFinishedPulling="2026-02-02 17:31:41.124170457 +0000 UTC m=+1002.276585722" observedRunningTime="2026-02-02 17:31:42.895794252 +0000 UTC m=+1004.048209517" watchObservedRunningTime="2026-02-02 17:31:42.920433046 +0000 UTC m=+1004.072848311" Feb 02 17:31:42 crc kubenswrapper[4858]: I0202 17:31:42.923508 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-gfbpg" podStartSLOduration=8.05236747 podStartE2EDuration="32.923496714s" podCreationTimestamp="2026-02-02 17:31:10 +0000 UTC" firstStartedPulling="2026-02-02 17:31:14.314366291 +0000 UTC m=+975.466781556" lastFinishedPulling="2026-02-02 17:31:39.185495525 +0000 UTC m=+1000.337910800" observedRunningTime="2026-02-02 17:31:42.920004364 +0000 UTC m=+1004.072419629" watchObservedRunningTime="2026-02-02 17:31:42.923496714 +0000 UTC m=+1004.075911979" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.243108 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-655b4bfd7-48p76"] Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.245173 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.247509 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.259513 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.265103 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-655b4bfd7-48p76"] Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.341739 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-httpd-config\") pod \"neutron-655b4bfd7-48p76\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.341793 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl274\" (UniqueName: \"kubernetes.io/projected/957f5537-848b-45b5-9bc2-7dbffbad0fed-kube-api-access-gl274\") pod \"neutron-655b4bfd7-48p76\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.341859 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-internal-tls-certs\") pod \"neutron-655b4bfd7-48p76\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.341922 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-ovndb-tls-certs\") pod \"neutron-655b4bfd7-48p76\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.341959 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-combined-ca-bundle\") pod \"neutron-655b4bfd7-48p76\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.342038 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-config\") pod \"neutron-655b4bfd7-48p76\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.342140 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-public-tls-certs\") pod \"neutron-655b4bfd7-48p76\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.446654 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-config\") pod \"neutron-655b4bfd7-48p76\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.446730 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-public-tls-certs\") pod \"neutron-655b4bfd7-48p76\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.446779 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-httpd-config\") pod \"neutron-655b4bfd7-48p76\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.446804 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl274\" (UniqueName: \"kubernetes.io/projected/957f5537-848b-45b5-9bc2-7dbffbad0fed-kube-api-access-gl274\") pod \"neutron-655b4bfd7-48p76\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.446837 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-internal-tls-certs\") pod \"neutron-655b4bfd7-48p76\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.446875 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-ovndb-tls-certs\") pod \"neutron-655b4bfd7-48p76\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.446902 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-combined-ca-bundle\") pod \"neutron-655b4bfd7-48p76\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.452230 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-internal-tls-certs\") pod \"neutron-655b4bfd7-48p76\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.455051 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-combined-ca-bundle\") pod \"neutron-655b4bfd7-48p76\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.462761 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-httpd-config\") pod \"neutron-655b4bfd7-48p76\" (UID: 
\"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.463230 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-public-tls-certs\") pod \"neutron-655b4bfd7-48p76\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.464684 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-config\") pod \"neutron-655b4bfd7-48p76\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.466128 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-ovndb-tls-certs\") pod \"neutron-655b4bfd7-48p76\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.474455 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl274\" (UniqueName: \"kubernetes.io/projected/957f5537-848b-45b5-9bc2-7dbffbad0fed-kube-api-access-gl274\") pod \"neutron-655b4bfd7-48p76\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.508428 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-txp59" podUID="1d9a40d6-2f6c-4c93-8191-d5dde87c136b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: i/o timeout" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.708133 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.927764 4858 generic.go:334] "Generic (PLEG): container finished" podID="98af866c-3b91-4a5a-9c15-681572dbd5de" containerID="6c925973ee6e3ea76ddad42ea5bf6d98dcf5fe4cbd849cf7cb3dc4c05ccdd051" exitCode=0 Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.928069 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-nz69z" event={"ID":"98af866c-3b91-4a5a-9c15-681572dbd5de","Type":"ContainerDied","Data":"6c925973ee6e3ea76ddad42ea5bf6d98dcf5fe4cbd849cf7cb3dc4c05ccdd051"} Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.939050 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758","Type":"ContainerStarted","Data":"c0cf5ed62afd157997262f026987b02cb3dea0a4bdcd5c6b6535d7131d209119"} Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.941320 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-857c87669d-c45h7" event={"ID":"24d5a090-abc7-4832-b6c6-2e36edf7d82e","Type":"ContainerStarted","Data":"4ea20cb217d595f212f7c30c0c9b8a9c83b72304dd0b30e106b284e161374882"} Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.943563 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78bb7f4c66-lspk6" event={"ID":"1f84b369-07ee-4a29-8f3b-be71b0e37772","Type":"ContainerStarted","Data":"1a983ea7418ec1f2f1a01f2c087b7761d05e25515efe2b33c6827c1edfd8f1e8"} Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.943595 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78bb7f4c66-lspk6" event={"ID":"1f84b369-07ee-4a29-8f3b-be71b0e37772","Type":"ContainerStarted","Data":"88d5a8458461dff54a5540e571394cffbc63159b3ace189e15c09d4ac2be2e59"} Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.944652 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-78bb7f4c66-lspk6" Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.954446 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68f4b57796-rhdnw" event={"ID":"4a208969-437b-449b-ba53-89364175a52a","Type":"ContainerStarted","Data":"c0a5d9e0cd966e56e1530aac2d6564ad6696c80a572bf2868041c426756e0d82"} Feb 02 17:31:43 crc kubenswrapper[4858]: I0202 17:31:43.975274 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-78bb7f4c66-lspk6" podStartSLOduration=3.975253141 podStartE2EDuration="3.975253141s" podCreationTimestamp="2026-02-02 17:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:31:43.970873396 +0000 UTC m=+1005.123288681" watchObservedRunningTime="2026-02-02 17:31:43.975253141 +0000 UTC m=+1005.127668406" Feb 02 17:31:44 crc kubenswrapper[4858]: I0202 17:31:44.454378 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-655b4bfd7-48p76"] Feb 02 17:31:44 crc kubenswrapper[4858]: I0202 17:31:44.972334 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-655b4bfd7-48p76" event={"ID":"957f5537-848b-45b5-9bc2-7dbffbad0fed","Type":"ContainerStarted","Data":"b951d087c3be3aee2a9cc1ae1bd4854c51b769ec98d267ad0660f6d51333d94b"} Feb 02 17:31:46 crc kubenswrapper[4858]: I0202 17:31:46.992920 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-857c87669d-c45h7" event={"ID":"24d5a090-abc7-4832-b6c6-2e36edf7d82e","Type":"ContainerStarted","Data":"51444d04afc916f2112443d8aa3f3ff3ae56b53e450edd0dc4f8f72fcd2a1a61"} Feb 02 17:31:46 crc kubenswrapper[4858]: I0202 17:31:46.996499 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"11d650f7-3342-41ec-b78a-0f9cbbac4368","Type":"ContainerStarted","Data":"5a302d9bb61d57cdd42f0f0fff939f598b4f41cae1a2c0e95be1becc2ffb5f5f"} Feb 02 17:31:46 crc kubenswrapper[4858]: I0202 17:31:46.998553 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-655b4bfd7-48p76" event={"ID":"957f5537-848b-45b5-9bc2-7dbffbad0fed","Type":"ContainerStarted","Data":"cd7e8ef4fb775371b61ba5efc5102f4c2da426998320120eda0d0b268a3d9f96"} Feb 02 17:31:47 crc kubenswrapper[4858]: I0202 17:31:47.000997 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68f4b57796-rhdnw" event={"ID":"4a208969-437b-449b-ba53-89364175a52a","Type":"ContainerStarted","Data":"667caf4e9e7b0c5e097fb0e4034b4fd2122ba0d05fc89a9d1e805d21cb14316b"} Feb 02 17:31:47 crc kubenswrapper[4858]: I0202 17:31:47.003558 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-nz69z" event={"ID":"98af866c-3b91-4a5a-9c15-681572dbd5de","Type":"ContainerStarted","Data":"1a019deb67b1efd2ff52c92b521a30d3f0f95f9506d6b3031aed77c71d1069c8"} Feb 02 17:31:47 crc kubenswrapper[4858]: I0202 17:31:47.003676 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:47 crc kubenswrapper[4858]: I0202 17:31:47.017101 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-857c87669d-c45h7" podStartSLOduration=27.384882575 podStartE2EDuration="28.017085188s" podCreationTimestamp="2026-02-02 17:31:19 +0000 UTC" firstStartedPulling="2026-02-02 17:31:42.146882033 +0000 UTC m=+1003.299297298" lastFinishedPulling="2026-02-02 17:31:42.779084646 +0000 UTC m=+1003.931499911" observedRunningTime="2026-02-02 17:31:47.016375118 +0000 UTC m=+1008.168790393" watchObservedRunningTime="2026-02-02 17:31:47.017085188 +0000 UTC m=+1008.169500453" Feb 02 17:31:47 crc kubenswrapper[4858]: I0202 17:31:47.046850 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-68f4b57796-rhdnw" podStartSLOduration=27.008558378 podStartE2EDuration="28.046834679s" podCreationTimestamp="2026-02-02 17:31:19 +0000 UTC" firstStartedPulling="2026-02-02 17:31:41.705169346 +0000 UTC m=+1002.857584611" lastFinishedPulling="2026-02-02 17:31:42.743445647 +0000 UTC m=+1003.895860912" observedRunningTime="2026-02-02 17:31:47.039286853 +0000 UTC m=+1008.191702118" watchObservedRunningTime="2026-02-02 17:31:47.046834679 +0000 UTC m=+1008.199249944" Feb 02 17:31:47 crc kubenswrapper[4858]: I0202 17:31:47.061467 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-nz69z" podStartSLOduration=7.061448656 podStartE2EDuration="7.061448656s" podCreationTimestamp="2026-02-02 17:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:31:47.059737507 +0000 UTC m=+1008.212152772" watchObservedRunningTime="2026-02-02 17:31:47.061448656 +0000 UTC m=+1008.213863921" Feb 02 17:31:49 crc kubenswrapper[4858]: I0202 17:31:49.034154 4858 generic.go:334] "Generic (PLEG): container 
finished" podID="e1355b1c-35d4-42d0-8780-7e01dd0b7a8d" containerID="6ba4e0eedf199bd7f5ace80d1079a59541293083f98002024bc39aed9ad830c8" exitCode=0 Feb 02 17:31:49 crc kubenswrapper[4858]: I0202 17:31:49.034260 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gfbpg" event={"ID":"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d","Type":"ContainerDied","Data":"6ba4e0eedf199bd7f5ace80d1079a59541293083f98002024bc39aed9ad830c8"} Feb 02 17:31:49 crc kubenswrapper[4858]: I0202 17:31:49.039310 4858 generic.go:334] "Generic (PLEG): container finished" podID="368436df-491c-4059-92f9-16993b192d76" containerID="6b334ecf4231d51ee8d95f3958353865b97e1a74fff41018a34850fd7f7761f7" exitCode=0 Feb 02 17:31:49 crc kubenswrapper[4858]: I0202 17:31:49.039352 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5d29t" event={"ID":"368436df-491c-4059-92f9-16993b192d76","Type":"ContainerDied","Data":"6b334ecf4231d51ee8d95f3958353865b97e1a74fff41018a34850fd7f7761f7"} Feb 02 17:31:50 crc kubenswrapper[4858]: I0202 17:31:50.221712 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:50 crc kubenswrapper[4858]: I0202 17:31:50.222108 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:31:50 crc kubenswrapper[4858]: I0202 17:31:50.349299 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:50 crc kubenswrapper[4858]: I0202 17:31:50.349354 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:31:51 crc kubenswrapper[4858]: I0202 17:31:51.067084 4858 generic.go:334] "Generic (PLEG): container finished" podID="56da7ca5-acf2-4372-9e48-20b829275727" containerID="2de5a1bd3a836ccf25ef75da4703f5354db68ef3c1d464e0a233a767be0e9bcf" exitCode=0 Feb 02 17:31:51 crc kubenswrapper[4858]: I0202 17:31:51.067292 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mdq86" event={"ID":"56da7ca5-acf2-4372-9e48-20b829275727","Type":"ContainerDied","Data":"2de5a1bd3a836ccf25ef75da4703f5354db68ef3c1d464e0a233a767be0e9bcf"} Feb 02 17:31:51 crc kubenswrapper[4858]: I0202 17:31:51.782582 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-gfbpg" Feb 02 17:31:51 crc kubenswrapper[4858]: I0202 17:31:51.821571 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xx6t\" (UniqueName: \"kubernetes.io/projected/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-kube-api-access-2xx6t\") pod \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\" (UID: \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\") " Feb 02 17:31:51 crc kubenswrapper[4858]: I0202 17:31:51.821890 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-scripts\") pod \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\" (UID: \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\") " Feb 02 17:31:51 crc kubenswrapper[4858]: I0202 17:31:51.821919 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-logs\") pod \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\" (UID: \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\") " Feb 02 17:31:51 crc kubenswrapper[4858]: I0202 17:31:51.822012 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-config-data\") pod \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\" (UID: \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\") " Feb 02 17:31:51 crc kubenswrapper[4858]: I0202 17:31:51.822046 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-combined-ca-bundle\") pod \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\" (UID: \"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d\") " Feb 02 17:31:51 crc kubenswrapper[4858]: I0202 17:31:51.823593 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-logs" (OuterVolumeSpecName: "logs") pod "e1355b1c-35d4-42d0-8780-7e01dd0b7a8d" (UID: "e1355b1c-35d4-42d0-8780-7e01dd0b7a8d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:31:51 crc kubenswrapper[4858]: I0202 17:31:51.830512 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-kube-api-access-2xx6t" (OuterVolumeSpecName: "kube-api-access-2xx6t") pod "e1355b1c-35d4-42d0-8780-7e01dd0b7a8d" (UID: "e1355b1c-35d4-42d0-8780-7e01dd0b7a8d"). InnerVolumeSpecName "kube-api-access-2xx6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:31:51 crc kubenswrapper[4858]: I0202 17:31:51.844333 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-scripts" (OuterVolumeSpecName: "scripts") pod "e1355b1c-35d4-42d0-8780-7e01dd0b7a8d" (UID: "e1355b1c-35d4-42d0-8780-7e01dd0b7a8d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:51 crc kubenswrapper[4858]: I0202 17:31:51.891574 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-config-data" (OuterVolumeSpecName: "config-data") pod "e1355b1c-35d4-42d0-8780-7e01dd0b7a8d" (UID: "e1355b1c-35d4-42d0-8780-7e01dd0b7a8d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:51 crc kubenswrapper[4858]: I0202 17:31:51.913087 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1355b1c-35d4-42d0-8780-7e01dd0b7a8d" (UID: "e1355b1c-35d4-42d0-8780-7e01dd0b7a8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:51 crc kubenswrapper[4858]: I0202 17:31:51.926740 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:51 crc kubenswrapper[4858]: I0202 17:31:51.926769 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-logs\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:51 crc kubenswrapper[4858]: I0202 17:31:51.926782 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:51 crc kubenswrapper[4858]: I0202 17:31:51.926794 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:51 crc kubenswrapper[4858]: I0202 17:31:51.926807 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xx6t\" (UniqueName: \"kubernetes.io/projected/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d-kube-api-access-2xx6t\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:51 crc kubenswrapper[4858]: I0202 17:31:51.926935 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.028419 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-fernet-keys\") pod \"368436df-491c-4059-92f9-16993b192d76\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.028481 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txq78\" (UniqueName: \"kubernetes.io/projected/368436df-491c-4059-92f9-16993b192d76-kube-api-access-txq78\") pod \"368436df-491c-4059-92f9-16993b192d76\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.028562 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-combined-ca-bundle\") pod \"368436df-491c-4059-92f9-16993b192d76\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.028581 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-scripts\") pod \"368436df-491c-4059-92f9-16993b192d76\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.028618 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-config-data\") pod \"368436df-491c-4059-92f9-16993b192d76\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.028725 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-credential-keys\") pod \"368436df-491c-4059-92f9-16993b192d76\" (UID: \"368436df-491c-4059-92f9-16993b192d76\") " Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.037171 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "368436df-491c-4059-92f9-16993b192d76" (UID: "368436df-491c-4059-92f9-16993b192d76"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.038207 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "368436df-491c-4059-92f9-16993b192d76" (UID: "368436df-491c-4059-92f9-16993b192d76"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.045228 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-scripts" (OuterVolumeSpecName: "scripts") pod "368436df-491c-4059-92f9-16993b192d76" (UID: "368436df-491c-4059-92f9-16993b192d76"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.053574 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/368436df-491c-4059-92f9-16993b192d76-kube-api-access-txq78" (OuterVolumeSpecName: "kube-api-access-txq78") pod "368436df-491c-4059-92f9-16993b192d76" (UID: "368436df-491c-4059-92f9-16993b192d76"). InnerVolumeSpecName "kube-api-access-txq78". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.097561 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "368436df-491c-4059-92f9-16993b192d76" (UID: "368436df-491c-4059-92f9-16993b192d76"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.101155 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-config-data" (OuterVolumeSpecName: "config-data") pod "368436df-491c-4059-92f9-16993b192d76" (UID: "368436df-491c-4059-92f9-16993b192d76"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.101437 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-655b4bfd7-48p76" event={"ID":"957f5537-848b-45b5-9bc2-7dbffbad0fed","Type":"ContainerStarted","Data":"58c48f2cdb1d647834064b7f623d4739cee1b51c67caeb895f426688b3d47862"} Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.101907 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.124457 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f4c2e0-6bac-4c5a-affd-48f2d8301111","Type":"ContainerStarted","Data":"ef780359f3f63d18376f8ff91714675ffb9b0bfe8b28f2f8ebfce32c385ba9e2"} Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.131641 4858 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.131693 4858 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.131703 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txq78\" (UniqueName: \"kubernetes.io/projected/368436df-491c-4059-92f9-16993b192d76-kube-api-access-txq78\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.131714 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.131723 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 
17:31:52.131751 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/368436df-491c-4059-92f9-16993b192d76-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.139215 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gfbpg" event={"ID":"e1355b1c-35d4-42d0-8780-7e01dd0b7a8d","Type":"ContainerDied","Data":"ee7fc42da061c4dfebfd296535e55b9f7a7721824af3981e27f36dd0d4251288"} Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.139260 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee7fc42da061c4dfebfd296535e55b9f7a7721824af3981e27f36dd0d4251288" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.139327 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-gfbpg" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.142644 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5d29t" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.143465 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5d29t" event={"ID":"368436df-491c-4059-92f9-16993b192d76","Type":"ContainerDied","Data":"07a6ddc98a5e0d8b44c34886d201093adcdc3f39dc7959a90b8b444b07f05c9f"} Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.143533 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07a6ddc98a5e0d8b44c34886d201093adcdc3f39dc7959a90b8b444b07f05c9f" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.437643 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mdq86" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.457460 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-655b4bfd7-48p76" podStartSLOduration=9.457442113 podStartE2EDuration="9.457442113s" podCreationTimestamp="2026-02-02 17:31:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:31:52.123248459 +0000 UTC m=+1013.275663724" watchObservedRunningTime="2026-02-02 17:31:52.457442113 +0000 UTC m=+1013.609857378" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.539250 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56da7ca5-acf2-4372-9e48-20b829275727-combined-ca-bundle\") pod \"56da7ca5-acf2-4372-9e48-20b829275727\" (UID: \"56da7ca5-acf2-4372-9e48-20b829275727\") " Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.539286 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqx4b\" (UniqueName: \"kubernetes.io/projected/56da7ca5-acf2-4372-9e48-20b829275727-kube-api-access-cqx4b\") pod \"56da7ca5-acf2-4372-9e48-20b829275727\" (UID: \"56da7ca5-acf2-4372-9e48-20b829275727\") " Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.539359 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/56da7ca5-acf2-4372-9e48-20b829275727-db-sync-config-data\") pod \"56da7ca5-acf2-4372-9e48-20b829275727\" (UID: \"56da7ca5-acf2-4372-9e48-20b829275727\") " Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.544693 4858 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56da7ca5-acf2-4372-9e48-20b829275727-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "56da7ca5-acf2-4372-9e48-20b829275727" (UID: "56da7ca5-acf2-4372-9e48-20b829275727"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.545173 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56da7ca5-acf2-4372-9e48-20b829275727-kube-api-access-cqx4b" (OuterVolumeSpecName: "kube-api-access-cqx4b") pod "56da7ca5-acf2-4372-9e48-20b829275727" (UID: "56da7ca5-acf2-4372-9e48-20b829275727"). InnerVolumeSpecName "kube-api-access-cqx4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.568070 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56da7ca5-acf2-4372-9e48-20b829275727-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56da7ca5-acf2-4372-9e48-20b829275727" (UID: "56da7ca5-acf2-4372-9e48-20b829275727"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.641906 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56da7ca5-acf2-4372-9e48-20b829275727-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.641952 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqx4b\" (UniqueName: \"kubernetes.io/projected/56da7ca5-acf2-4372-9e48-20b829275727-kube-api-access-cqx4b\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:52 crc kubenswrapper[4858]: I0202 17:31:52.642001 4858 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/56da7ca5-acf2-4372-9e48-20b829275727-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.017669 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5877769c8-jgqfs"] Feb 02 17:31:53 crc kubenswrapper[4858]: E0202 17:31:53.018034 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56da7ca5-acf2-4372-9e48-20b829275727" containerName="barbican-db-sync" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.018054 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="56da7ca5-acf2-4372-9e48-20b829275727" containerName="barbican-db-sync" Feb 02 17:31:53 crc kubenswrapper[4858]: E0202 17:31:53.018065 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="368436df-491c-4059-92f9-16993b192d76" containerName="keystone-bootstrap" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.018071 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="368436df-491c-4059-92f9-16993b192d76" containerName="keystone-bootstrap" Feb 02 17:31:53 crc kubenswrapper[4858]: E0202 17:31:53.018079 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1355b1c-35d4-42d0-8780-7e01dd0b7a8d" containerName="placement-db-sync" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.018086 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1355b1c-35d4-42d0-8780-7e01dd0b7a8d" containerName="placement-db-sync" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.018275 4858 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e1355b1c-35d4-42d0-8780-7e01dd0b7a8d" containerName="placement-db-sync" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.018291 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="368436df-491c-4059-92f9-16993b192d76" containerName="keystone-bootstrap" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.018313 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="56da7ca5-acf2-4372-9e48-20b829275727" containerName="barbican-db-sync" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.019130 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.021705 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.021879 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.022117 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.022898 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-pt5l4" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.040323 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.047149 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5877769c8-jgqfs"] Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.097890 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6fb4977965-lqqjm"] Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.099082 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.104628 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.104829 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.108359 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7ztbs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.108358 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.108448 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.108767 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.112507 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6fb4977965-lqqjm"] Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.153044 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqd5c\" (UniqueName: \"kubernetes.io/projected/72c71bde-c7f7-4e51-955d-e9a808664d2a-kube-api-access-zqd5c\") pod \"placement-5877769c8-jgqfs\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") " pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.153136 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-combined-ca-bundle\") pod \"placement-5877769c8-jgqfs\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") " pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.153166 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd4nz\" (UniqueName: \"kubernetes.io/projected/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-kube-api-access-qd4nz\") pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.153228 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-combined-ca-bundle\") pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.153270 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-fernet-keys\") pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.153339 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-public-tls-certs\") pod 
\"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.153373 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72c71bde-c7f7-4e51-955d-e9a808664d2a-logs\") pod \"placement-5877769c8-jgqfs\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") " pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.153403 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-config-data\") pod \"placement-5877769c8-jgqfs\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") " pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.153472 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-internal-tls-certs\") pod \"placement-5877769c8-jgqfs\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") " pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.153546 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-internal-tls-certs\") pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.153580 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-credential-keys\") pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.153638 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-public-tls-certs\") pod \"placement-5877769c8-jgqfs\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") " pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.153670 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-config-data\") pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.153735 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-scripts\") pod \"placement-5877769c8-jgqfs\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") " pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.154067 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-scripts\") 
pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.157369 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758","Type":"ContainerStarted","Data":"52c97d980459381885891c9291b70b89a8bb9f8043655313073d4515c9fd8dc4"} Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.159234 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"11d650f7-3342-41ec-b78a-0f9cbbac4368","Type":"ContainerStarted","Data":"736d47da60f9d74e9eca323110304c22ad8f40b65dcf1136e9c4dae5183c7c3a"} Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.163226 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mdq86" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.163310 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mdq86" event={"ID":"56da7ca5-acf2-4372-9e48-20b829275727","Type":"ContainerDied","Data":"48a4f045594174cfed4981c851385e3d22d1a3d986c3140e3a272ff84cf3492e"} Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.163365 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48a4f045594174cfed4981c851385e3d22d1a3d986c3140e3a272ff84cf3492e" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.218551 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=27.218527619 podStartE2EDuration="27.218527619s" podCreationTimestamp="2026-02-02 17:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:31:53.210316704 +0000 UTC m=+1014.362731989" watchObservedRunningTime="2026-02-02 17:31:53.218527619 +0000 UTC m=+1014.370942904" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.241905 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=14.241888287 podStartE2EDuration="14.241888287s" podCreationTimestamp="2026-02-02 17:31:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:31:53.240016043 +0000 UTC m=+1014.392431308" watchObservedRunningTime="2026-02-02 17:31:53.241888287 +0000 UTC m=+1014.394303552" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.255306 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-scripts\") pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.255395 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqd5c\" (UniqueName: \"kubernetes.io/projected/72c71bde-c7f7-4e51-955d-e9a808664d2a-kube-api-access-zqd5c\") pod \"placement-5877769c8-jgqfs\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") " pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.255440 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-combined-ca-bundle\") pod \"placement-5877769c8-jgqfs\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") " pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.255455 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd4nz\" (UniqueName: \"kubernetes.io/projected/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-kube-api-access-qd4nz\") pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.255516 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-combined-ca-bundle\") pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.255559 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-fernet-keys\") pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.255636 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-public-tls-certs\") pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.255659 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72c71bde-c7f7-4e51-955d-e9a808664d2a-logs\") pod \"placement-5877769c8-jgqfs\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") " pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.255690 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-config-data\") pod \"placement-5877769c8-jgqfs\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") " pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.255732 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-internal-tls-certs\") pod \"placement-5877769c8-jgqfs\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") " pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.255791 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-internal-tls-certs\") pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.255809 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-credential-keys\") pod \"keystone-6fb4977965-lqqjm\" 
(UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.255832 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-public-tls-certs\") pod \"placement-5877769c8-jgqfs\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") " pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.255855 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-config-data\") pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.255870 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-scripts\") pod \"placement-5877769c8-jgqfs\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") " pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.260349 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-config-data\") pod \"placement-5877769c8-jgqfs\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") " pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.261258 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-combined-ca-bundle\") pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.262361 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72c71bde-c7f7-4e51-955d-e9a808664d2a-logs\") pod \"placement-5877769c8-jgqfs\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") " pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.265872 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-internal-tls-certs\") pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.269336 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-scripts\") pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.270421 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-public-tls-certs\") pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.270875 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-combined-ca-bundle\") pod \"placement-5877769c8-jgqfs\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") " pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.273916 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-public-tls-certs\") pod \"placement-5877769c8-jgqfs\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") " pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.274563 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-scripts\") pod \"placement-5877769c8-jgqfs\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") " pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.277590 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-config-data\") pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.280150 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd4nz\" (UniqueName: \"kubernetes.io/projected/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-kube-api-access-qd4nz\") pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.281316 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqd5c\" (UniqueName: \"kubernetes.io/projected/72c71bde-c7f7-4e51-955d-e9a808664d2a-kube-api-access-zqd5c\") pod \"placement-5877769c8-jgqfs\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") " pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.281890 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-credential-keys\") pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.282112 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e92af156-c3ae-4bdc-bf59-b07c51dbaef6-fernet-keys\") pod \"keystone-6fb4977965-lqqjm\" (UID: \"e92af156-c3ae-4bdc-bf59-b07c51dbaef6\") " pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.283250 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-internal-tls-certs\") pod \"placement-5877769c8-jgqfs\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") " pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.336452 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.414037 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-786777f949-6vglz"] Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.415454 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-786777f949-6vglz" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.419016 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.424731 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5h5d7" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.424966 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.425118 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.435240 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-786777f949-6vglz"] Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.459458 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98781ea-94a7-4f21-975e-4a8d48fad122-combined-ca-bundle\") pod \"barbican-worker-786777f949-6vglz\" (UID: \"f98781ea-94a7-4f21-975e-4a8d48fad122\") " pod="openstack/barbican-worker-786777f949-6vglz" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.459666 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f98781ea-94a7-4f21-975e-4a8d48fad122-logs\") pod \"barbican-worker-786777f949-6vglz\" (UID: \"f98781ea-94a7-4f21-975e-4a8d48fad122\") " pod="openstack/barbican-worker-786777f949-6vglz" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.459785 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f98781ea-94a7-4f21-975e-4a8d48fad122-config-data-custom\") pod \"barbican-worker-786777f949-6vglz\" (UID: \"f98781ea-94a7-4f21-975e-4a8d48fad122\") " pod="openstack/barbican-worker-786777f949-6vglz" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.459885 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8q96\" (UniqueName: \"kubernetes.io/projected/f98781ea-94a7-4f21-975e-4a8d48fad122-kube-api-access-p8q96\") pod \"barbican-worker-786777f949-6vglz\" (UID: \"f98781ea-94a7-4f21-975e-4a8d48fad122\") " pod="openstack/barbican-worker-786777f949-6vglz" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.460028 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98781ea-94a7-4f21-975e-4a8d48fad122-config-data\") pod \"barbican-worker-786777f949-6vglz\" (UID: \"f98781ea-94a7-4f21-975e-4a8d48fad122\") " pod="openstack/barbican-worker-786777f949-6vglz" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.462143 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5966976bc8-gg2wx"] Feb 02 17:31:53 crc 
kubenswrapper[4858]: I0202 17:31:53.463639 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.466867 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.495041 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5966976bc8-gg2wx"] Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.581654 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f82a4143-d161-4bab-ba3a-bc426cfe53b9-combined-ca-bundle\") pod \"barbican-keystone-listener-5966976bc8-gg2wx\" (UID: \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\") " pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.581722 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98781ea-94a7-4f21-975e-4a8d48fad122-combined-ca-bundle\") pod \"barbican-worker-786777f949-6vglz\" (UID: \"f98781ea-94a7-4f21-975e-4a8d48fad122\") " pod="openstack/barbican-worker-786777f949-6vglz" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.583502 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f98781ea-94a7-4f21-975e-4a8d48fad122-logs\") pod \"barbican-worker-786777f949-6vglz\" (UID: \"f98781ea-94a7-4f21-975e-4a8d48fad122\") " pod="openstack/barbican-worker-786777f949-6vglz" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.583571 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f98781ea-94a7-4f21-975e-4a8d48fad122-config-data-custom\") pod \"barbican-worker-786777f949-6vglz\" (UID: \"f98781ea-94a7-4f21-975e-4a8d48fad122\") " pod="openstack/barbican-worker-786777f949-6vglz" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.583660 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8q96\" (UniqueName: \"kubernetes.io/projected/f98781ea-94a7-4f21-975e-4a8d48fad122-kube-api-access-p8q96\") pod \"barbican-worker-786777f949-6vglz\" (UID: \"f98781ea-94a7-4f21-975e-4a8d48fad122\") " pod="openstack/barbican-worker-786777f949-6vglz" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.583727 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f82a4143-d161-4bab-ba3a-bc426cfe53b9-config-data-custom\") pod \"barbican-keystone-listener-5966976bc8-gg2wx\" (UID: \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\") " pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.583751 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcb2z\" (UniqueName: \"kubernetes.io/projected/f82a4143-d161-4bab-ba3a-bc426cfe53b9-kube-api-access-bcb2z\") pod \"barbican-keystone-listener-5966976bc8-gg2wx\" (UID: \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\") " pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.583785 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98781ea-94a7-4f21-975e-4a8d48fad122-config-data\") pod \"barbican-worker-786777f949-6vglz\" (UID: \"f98781ea-94a7-4f21-975e-4a8d48fad122\") " pod="openstack/barbican-worker-786777f949-6vglz" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.583828 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f82a4143-d161-4bab-ba3a-bc426cfe53b9-logs\") pod \"barbican-keystone-listener-5966976bc8-gg2wx\" (UID: \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\") " pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.583836 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f98781ea-94a7-4f21-975e-4a8d48fad122-logs\") pod \"barbican-worker-786777f949-6vglz\" (UID: \"f98781ea-94a7-4f21-975e-4a8d48fad122\") " pod="openstack/barbican-worker-786777f949-6vglz" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.583944 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f82a4143-d161-4bab-ba3a-bc426cfe53b9-config-data\") pod \"barbican-keystone-listener-5966976bc8-gg2wx\" (UID: \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\") " pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.603669 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98781ea-94a7-4f21-975e-4a8d48fad122-combined-ca-bundle\") pod \"barbican-worker-786777f949-6vglz\" (UID: \"f98781ea-94a7-4f21-975e-4a8d48fad122\") " pod="openstack/barbican-worker-786777f949-6vglz" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.604187 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f98781ea-94a7-4f21-975e-4a8d48fad122-config-data-custom\") pod \"barbican-worker-786777f949-6vglz\" (UID: \"f98781ea-94a7-4f21-975e-4a8d48fad122\") " pod="openstack/barbican-worker-786777f949-6vglz" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.612297 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98781ea-94a7-4f21-975e-4a8d48fad122-config-data\") pod \"barbican-worker-786777f949-6vglz\" (UID: \"f98781ea-94a7-4f21-975e-4a8d48fad122\") " pod="openstack/barbican-worker-786777f949-6vglz" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.622865 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8q96\" (UniqueName: \"kubernetes.io/projected/f98781ea-94a7-4f21-975e-4a8d48fad122-kube-api-access-p8q96\") pod \"barbican-worker-786777f949-6vglz\" (UID: \"f98781ea-94a7-4f21-975e-4a8d48fad122\") " pod="openstack/barbican-worker-786777f949-6vglz" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.629309 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-749df8c57d-rd7dc"] Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.630798 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-749df8c57d-rd7dc" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.651834 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-8b9496955-6bsmq"] Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.668426 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-8b9496955-6bsmq" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.671450 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-749df8c57d-rd7dc"] Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.681625 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8b9496955-6bsmq"] Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.705705 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08f67234-d648-4127-98d7-fcf00df7e1d3-logs\") pod \"barbican-keystone-listener-749df8c57d-rd7dc\" (UID: \"08f67234-d648-4127-98d7-fcf00df7e1d3\") " pod="openstack/barbican-keystone-listener-749df8c57d-rd7dc" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.706053 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f82a4143-d161-4bab-ba3a-bc426cfe53b9-config-data-custom\") pod \"barbican-keystone-listener-5966976bc8-gg2wx\" (UID: \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\") " pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.706080 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcb2z\" (UniqueName: \"kubernetes.io/projected/f82a4143-d161-4bab-ba3a-bc426cfe53b9-kube-api-access-bcb2z\") pod \"barbican-keystone-listener-5966976bc8-gg2wx\" (UID: \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\") " pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.706344 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f82a4143-d161-4bab-ba3a-bc426cfe53b9-logs\") pod \"barbican-keystone-listener-5966976bc8-gg2wx\" (UID: \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\") " pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.706400 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f67234-d648-4127-98d7-fcf00df7e1d3-combined-ca-bundle\") pod \"barbican-keystone-listener-749df8c57d-rd7dc\" (UID: \"08f67234-d648-4127-98d7-fcf00df7e1d3\") " pod="openstack/barbican-keystone-listener-749df8c57d-rd7dc" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.706423 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f82a4143-d161-4bab-ba3a-bc426cfe53b9-config-data\") pod \"barbican-keystone-listener-5966976bc8-gg2wx\" (UID: \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\") " pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.706478 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/08f67234-d648-4127-98d7-fcf00df7e1d3-config-data\") pod \"barbican-keystone-listener-749df8c57d-rd7dc\" (UID: \"08f67234-d648-4127-98d7-fcf00df7e1d3\") " pod="openstack/barbican-keystone-listener-749df8c57d-rd7dc" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.706576 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f82a4143-d161-4bab-ba3a-bc426cfe53b9-combined-ca-bundle\") pod \"barbican-keystone-listener-5966976bc8-gg2wx\" (UID: \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\") " pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.706629 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08f67234-d648-4127-98d7-fcf00df7e1d3-config-data-custom\") pod \"barbican-keystone-listener-749df8c57d-rd7dc\" (UID: \"08f67234-d648-4127-98d7-fcf00df7e1d3\") " pod="openstack/barbican-keystone-listener-749df8c57d-rd7dc" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.706701 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pc52\" (UniqueName: \"kubernetes.io/projected/08f67234-d648-4127-98d7-fcf00df7e1d3-kube-api-access-9pc52\") pod \"barbican-keystone-listener-749df8c57d-rd7dc\" (UID: \"08f67234-d648-4127-98d7-fcf00df7e1d3\") " pod="openstack/barbican-keystone-listener-749df8c57d-rd7dc" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.713722 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f82a4143-d161-4bab-ba3a-bc426cfe53b9-logs\") pod \"barbican-keystone-listener-5966976bc8-gg2wx\" (UID: \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\") " pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.723758 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f82a4143-d161-4bab-ba3a-bc426cfe53b9-config-data-custom\") pod \"barbican-keystone-listener-5966976bc8-gg2wx\" (UID: \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\") " pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.725174 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-nz69z"] Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.725443 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-nz69z" podUID="98af866c-3b91-4a5a-9c15-681572dbd5de" containerName="dnsmasq-dns" containerID="cri-o://1a019deb67b1efd2ff52c92b521a30d3f0f95f9506d6b3031aed77c71d1069c8" gracePeriod=10 Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.729917 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f82a4143-d161-4bab-ba3a-bc426cfe53b9-config-data\") pod \"barbican-keystone-listener-5966976bc8-gg2wx\" (UID: \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\") " pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.735876 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcb2z\" (UniqueName: \"kubernetes.io/projected/f82a4143-d161-4bab-ba3a-bc426cfe53b9-kube-api-access-bcb2z\") pod 
\"barbican-keystone-listener-5966976bc8-gg2wx\" (UID: \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\") " pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.739271 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f82a4143-d161-4bab-ba3a-bc426cfe53b9-combined-ca-bundle\") pod \"barbican-keystone-listener-5966976bc8-gg2wx\" (UID: \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\") " pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.739521 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.768672 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-zst5s"] Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.770824 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.806643 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-786777f949-6vglz" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.808240 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cd8cddc-99bb-4e60-85e5-07d6090cfd49-logs\") pod \"barbican-worker-8b9496955-6bsmq\" (UID: \"9cd8cddc-99bb-4e60-85e5-07d6090cfd49\") " pod="openstack/barbican-worker-8b9496955-6bsmq" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.808296 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08f67234-d648-4127-98d7-fcf00df7e1d3-config-data-custom\") pod \"barbican-keystone-listener-749df8c57d-rd7dc\" (UID: \"08f67234-d648-4127-98d7-fcf00df7e1d3\") " pod="openstack/barbican-keystone-listener-749df8c57d-rd7dc" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.808361 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pc52\" (UniqueName: \"kubernetes.io/projected/08f67234-d648-4127-98d7-fcf00df7e1d3-kube-api-access-9pc52\") pod \"barbican-keystone-listener-749df8c57d-rd7dc\" (UID: \"08f67234-d648-4127-98d7-fcf00df7e1d3\") " pod="openstack/barbican-keystone-listener-749df8c57d-rd7dc" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.808413 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08f67234-d648-4127-98d7-fcf00df7e1d3-logs\") pod \"barbican-keystone-listener-749df8c57d-rd7dc\" (UID: \"08f67234-d648-4127-98d7-fcf00df7e1d3\") " pod="openstack/barbican-keystone-listener-749df8c57d-rd7dc" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.808442 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5k9v\" (UniqueName: \"kubernetes.io/projected/9cd8cddc-99bb-4e60-85e5-07d6090cfd49-kube-api-access-n5k9v\") pod \"barbican-worker-8b9496955-6bsmq\" (UID: \"9cd8cddc-99bb-4e60-85e5-07d6090cfd49\") " pod="openstack/barbican-worker-8b9496955-6bsmq" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.808497 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/9cd8cddc-99bb-4e60-85e5-07d6090cfd49-config-data\") pod \"barbican-worker-8b9496955-6bsmq\" (UID: \"9cd8cddc-99bb-4e60-85e5-07d6090cfd49\") " pod="openstack/barbican-worker-8b9496955-6bsmq" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.808521 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9cd8cddc-99bb-4e60-85e5-07d6090cfd49-config-data-custom\") pod \"barbican-worker-8b9496955-6bsmq\" (UID: \"9cd8cddc-99bb-4e60-85e5-07d6090cfd49\") " pod="openstack/barbican-worker-8b9496955-6bsmq" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.808544 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f67234-d648-4127-98d7-fcf00df7e1d3-combined-ca-bundle\") pod \"barbican-keystone-listener-749df8c57d-rd7dc\" (UID: \"08f67234-d648-4127-98d7-fcf00df7e1d3\") " pod="openstack/barbican-keystone-listener-749df8c57d-rd7dc" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.808585 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f67234-d648-4127-98d7-fcf00df7e1d3-config-data\") pod \"barbican-keystone-listener-749df8c57d-rd7dc\" (UID: \"08f67234-d648-4127-98d7-fcf00df7e1d3\") " pod="openstack/barbican-keystone-listener-749df8c57d-rd7dc" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.808611 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cd8cddc-99bb-4e60-85e5-07d6090cfd49-combined-ca-bundle\") pod \"barbican-worker-8b9496955-6bsmq\" (UID: \"9cd8cddc-99bb-4e60-85e5-07d6090cfd49\") " pod="openstack/barbican-worker-8b9496955-6bsmq" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.810597 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08f67234-d648-4127-98d7-fcf00df7e1d3-logs\") pod \"barbican-keystone-listener-749df8c57d-rd7dc\" (UID: \"08f67234-d648-4127-98d7-fcf00df7e1d3\") " pod="openstack/barbican-keystone-listener-749df8c57d-rd7dc" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.832120 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-zst5s"] Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.836768 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f67234-d648-4127-98d7-fcf00df7e1d3-combined-ca-bundle\") pod \"barbican-keystone-listener-749df8c57d-rd7dc\" (UID: \"08f67234-d648-4127-98d7-fcf00df7e1d3\") " pod="openstack/barbican-keystone-listener-749df8c57d-rd7dc" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.841196 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.858548 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08f67234-d648-4127-98d7-fcf00df7e1d3-config-data-custom\") pod \"barbican-keystone-listener-749df8c57d-rd7dc\" (UID: \"08f67234-d648-4127-98d7-fcf00df7e1d3\") " pod="openstack/barbican-keystone-listener-749df8c57d-rd7dc" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.861508 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pc52\" (UniqueName: \"kubernetes.io/projected/08f67234-d648-4127-98d7-fcf00df7e1d3-kube-api-access-9pc52\") pod \"barbican-keystone-listener-749df8c57d-rd7dc\" (UID: \"08f67234-d648-4127-98d7-fcf00df7e1d3\") " pod="openstack/barbican-keystone-listener-749df8c57d-rd7dc" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.892025 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f67234-d648-4127-98d7-fcf00df7e1d3-config-data\") pod \"barbican-keystone-listener-749df8c57d-rd7dc\" (UID: \"08f67234-d648-4127-98d7-fcf00df7e1d3\") " pod="openstack/barbican-keystone-listener-749df8c57d-rd7dc" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.914047 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cd8cddc-99bb-4e60-85e5-07d6090cfd49-logs\") pod \"barbican-worker-8b9496955-6bsmq\" (UID: \"9cd8cddc-99bb-4e60-85e5-07d6090cfd49\") " pod="openstack/barbican-worker-8b9496955-6bsmq" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.914083 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-zst5s\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.914133 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-dns-svc\") pod \"dnsmasq-dns-85ff748b95-zst5s\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.914172 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5k9v\" (UniqueName: \"kubernetes.io/projected/9cd8cddc-99bb-4e60-85e5-07d6090cfd49-kube-api-access-n5k9v\") pod \"barbican-worker-8b9496955-6bsmq\" (UID: \"9cd8cddc-99bb-4e60-85e5-07d6090cfd49\") " pod="openstack/barbican-worker-8b9496955-6bsmq" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.914209 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cd8cddc-99bb-4e60-85e5-07d6090cfd49-config-data\") pod \"barbican-worker-8b9496955-6bsmq\" (UID: \"9cd8cddc-99bb-4e60-85e5-07d6090cfd49\") " pod="openstack/barbican-worker-8b9496955-6bsmq" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.914223 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/9cd8cddc-99bb-4e60-85e5-07d6090cfd49-config-data-custom\") pod \"barbican-worker-8b9496955-6bsmq\" (UID: \"9cd8cddc-99bb-4e60-85e5-07d6090cfd49\") " pod="openstack/barbican-worker-8b9496955-6bsmq" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.914241 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-zst5s\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.914259 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-zst5s\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.914279 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-config\") pod \"dnsmasq-dns-85ff748b95-zst5s\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.914304 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cd8cddc-99bb-4e60-85e5-07d6090cfd49-combined-ca-bundle\") pod \"barbican-worker-8b9496955-6bsmq\" (UID: \"9cd8cddc-99bb-4e60-85e5-07d6090cfd49\") " pod="openstack/barbican-worker-8b9496955-6bsmq" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.914324 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66xvt\" (UniqueName: \"kubernetes.io/projected/60620bf9-46f0-4b74-b019-a24815a64e3d-kube-api-access-66xvt\") pod \"dnsmasq-dns-85ff748b95-zst5s\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.914814 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cd8cddc-99bb-4e60-85e5-07d6090cfd49-logs\") pod \"barbican-worker-8b9496955-6bsmq\" (UID: \"9cd8cddc-99bb-4e60-85e5-07d6090cfd49\") " pod="openstack/barbican-worker-8b9496955-6bsmq" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.930577 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9cd8cddc-99bb-4e60-85e5-07d6090cfd49-config-data-custom\") pod \"barbican-worker-8b9496955-6bsmq\" (UID: \"9cd8cddc-99bb-4e60-85e5-07d6090cfd49\") " pod="openstack/barbican-worker-8b9496955-6bsmq" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.942578 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cd8cddc-99bb-4e60-85e5-07d6090cfd49-config-data\") pod \"barbican-worker-8b9496955-6bsmq\" (UID: \"9cd8cddc-99bb-4e60-85e5-07d6090cfd49\") " pod="openstack/barbican-worker-8b9496955-6bsmq" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.958626 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5k9v\" 
(UniqueName: \"kubernetes.io/projected/9cd8cddc-99bb-4e60-85e5-07d6090cfd49-kube-api-access-n5k9v\") pod \"barbican-worker-8b9496955-6bsmq\" (UID: \"9cd8cddc-99bb-4e60-85e5-07d6090cfd49\") " pod="openstack/barbican-worker-8b9496955-6bsmq" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.961202 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cd8cddc-99bb-4e60-85e5-07d6090cfd49-combined-ca-bundle\") pod \"barbican-worker-8b9496955-6bsmq\" (UID: \"9cd8cddc-99bb-4e60-85e5-07d6090cfd49\") " pod="openstack/barbican-worker-8b9496955-6bsmq" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.978033 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-75f97b655d-lv8wc"] Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.980651 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75f97b655d-lv8wc" Feb 02 17:31:53 crc kubenswrapper[4858]: I0202 17:31:53.985265 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.015946 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-dns-svc\") pod \"dnsmasq-dns-85ff748b95-zst5s\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.016097 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-zst5s\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.016115 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-zst5s\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.016136 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-config\") pod \"dnsmasq-dns-85ff748b95-zst5s\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.016174 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66xvt\" (UniqueName: \"kubernetes.io/projected/60620bf9-46f0-4b74-b019-a24815a64e3d-kube-api-access-66xvt\") pod \"dnsmasq-dns-85ff748b95-zst5s\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.016218 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-zst5s\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.017316 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-zst5s\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.017840 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-zst5s\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.021126 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-config\") pod \"dnsmasq-dns-85ff748b95-zst5s\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.021668 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-dns-svc\") pod \"dnsmasq-dns-85ff748b95-zst5s\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.022047 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-749df8c57d-rd7dc" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.023361 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75f97b655d-lv8wc"] Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.035059 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-zst5s\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.069375 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66xvt\" (UniqueName: \"kubernetes.io/projected/60620bf9-46f0-4b74-b019-a24815a64e3d-kube-api-access-66xvt\") pod \"dnsmasq-dns-85ff748b95-zst5s\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.118081 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de5f94f8-30f4-4e19-8195-eb6a5b281de9-config-data-custom\") pod \"barbican-api-75f97b655d-lv8wc\" (UID: \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\") " pod="openstack/barbican-api-75f97b655d-lv8wc" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.118150 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5f94f8-30f4-4e19-8195-eb6a5b281de9-combined-ca-bundle\") pod \"barbican-api-75f97b655d-lv8wc\" (UID: \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\") " pod="openstack/barbican-api-75f97b655d-lv8wc" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.118186 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-d2hnq\" (UniqueName: \"kubernetes.io/projected/de5f94f8-30f4-4e19-8195-eb6a5b281de9-kube-api-access-d2hnq\") pod \"barbican-api-75f97b655d-lv8wc\" (UID: \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\") " pod="openstack/barbican-api-75f97b655d-lv8wc" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.118305 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de5f94f8-30f4-4e19-8195-eb6a5b281de9-config-data\") pod \"barbican-api-75f97b655d-lv8wc\" (UID: \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\") " pod="openstack/barbican-api-75f97b655d-lv8wc" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.118376 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de5f94f8-30f4-4e19-8195-eb6a5b281de9-logs\") pod \"barbican-api-75f97b655d-lv8wc\" (UID: \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\") " pod="openstack/barbican-api-75f97b655d-lv8wc" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.173085 4858 generic.go:334] "Generic (PLEG): container finished" podID="98af866c-3b91-4a5a-9c15-681572dbd5de" containerID="1a019deb67b1efd2ff52c92b521a30d3f0f95f9506d6b3031aed77c71d1069c8" exitCode=0 Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.173625 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-nz69z" event={"ID":"98af866c-3b91-4a5a-9c15-681572dbd5de","Type":"ContainerDied","Data":"1a019deb67b1efd2ff52c92b521a30d3f0f95f9506d6b3031aed77c71d1069c8"} Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.219480 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de5f94f8-30f4-4e19-8195-eb6a5b281de9-config-data\") pod \"barbican-api-75f97b655d-lv8wc\" (UID: \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\") " pod="openstack/barbican-api-75f97b655d-lv8wc" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.219573 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de5f94f8-30f4-4e19-8195-eb6a5b281de9-logs\") pod \"barbican-api-75f97b655d-lv8wc\" (UID: \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\") " pod="openstack/barbican-api-75f97b655d-lv8wc" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.219634 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de5f94f8-30f4-4e19-8195-eb6a5b281de9-config-data-custom\") pod \"barbican-api-75f97b655d-lv8wc\" (UID: \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\") " pod="openstack/barbican-api-75f97b655d-lv8wc" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.219676 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5f94f8-30f4-4e19-8195-eb6a5b281de9-combined-ca-bundle\") pod \"barbican-api-75f97b655d-lv8wc\" (UID: \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\") " pod="openstack/barbican-api-75f97b655d-lv8wc" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.219712 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2hnq\" (UniqueName: \"kubernetes.io/projected/de5f94f8-30f4-4e19-8195-eb6a5b281de9-kube-api-access-d2hnq\") pod \"barbican-api-75f97b655d-lv8wc\" (UID: \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\") " 
pod="openstack/barbican-api-75f97b655d-lv8wc" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.221603 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de5f94f8-30f4-4e19-8195-eb6a5b281de9-logs\") pod \"barbican-api-75f97b655d-lv8wc\" (UID: \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\") " pod="openstack/barbican-api-75f97b655d-lv8wc" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.231922 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de5f94f8-30f4-4e19-8195-eb6a5b281de9-config-data-custom\") pod \"barbican-api-75f97b655d-lv8wc\" (UID: \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\") " pod="openstack/barbican-api-75f97b655d-lv8wc" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.232462 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de5f94f8-30f4-4e19-8195-eb6a5b281de9-config-data\") pod \"barbican-api-75f97b655d-lv8wc\" (UID: \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\") " pod="openstack/barbican-api-75f97b655d-lv8wc" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.232837 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5f94f8-30f4-4e19-8195-eb6a5b281de9-combined-ca-bundle\") pod \"barbican-api-75f97b655d-lv8wc\" (UID: \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\") " pod="openstack/barbican-api-75f97b655d-lv8wc" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.246156 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2hnq\" (UniqueName: \"kubernetes.io/projected/de5f94f8-30f4-4e19-8195-eb6a5b281de9-kube-api-access-d2hnq\") pod \"barbican-api-75f97b655d-lv8wc\" (UID: \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\") " pod="openstack/barbican-api-75f97b655d-lv8wc" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.262684 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-8b9496955-6bsmq" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.321252 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5877769c8-jgqfs"] Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.329019 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.366593 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75f97b655d-lv8wc" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.590439 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.603044 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6fb4977965-lqqjm"] Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.631614 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-dns-swift-storage-0\") pod \"98af866c-3b91-4a5a-9c15-681572dbd5de\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.631863 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-ovsdbserver-sb\") pod \"98af866c-3b91-4a5a-9c15-681572dbd5de\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.631900 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-dns-svc\") pod \"98af866c-3b91-4a5a-9c15-681572dbd5de\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.631942 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j226h\" (UniqueName: \"kubernetes.io/projected/98af866c-3b91-4a5a-9c15-681572dbd5de-kube-api-access-j226h\") pod \"98af866c-3b91-4a5a-9c15-681572dbd5de\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.632041 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-ovsdbserver-nb\") pod \"98af866c-3b91-4a5a-9c15-681572dbd5de\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.632116 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-config\") pod \"98af866c-3b91-4a5a-9c15-681572dbd5de\" (UID: \"98af866c-3b91-4a5a-9c15-681572dbd5de\") " Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.655565 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98af866c-3b91-4a5a-9c15-681572dbd5de-kube-api-access-j226h" (OuterVolumeSpecName: "kube-api-access-j226h") pod "98af866c-3b91-4a5a-9c15-681572dbd5de" (UID: "98af866c-3b91-4a5a-9c15-681572dbd5de"). InnerVolumeSpecName "kube-api-access-j226h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.734753 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j226h\" (UniqueName: \"kubernetes.io/projected/98af866c-3b91-4a5a-9c15-681572dbd5de-kube-api-access-j226h\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.838201 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-config" (OuterVolumeSpecName: "config") pod "98af866c-3b91-4a5a-9c15-681572dbd5de" (UID: "98af866c-3b91-4a5a-9c15-681572dbd5de"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.838346 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "98af866c-3b91-4a5a-9c15-681572dbd5de" (UID: "98af866c-3b91-4a5a-9c15-681572dbd5de"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.845295 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "98af866c-3b91-4a5a-9c15-681572dbd5de" (UID: "98af866c-3b91-4a5a-9c15-681572dbd5de"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.865812 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "98af866c-3b91-4a5a-9c15-681572dbd5de" (UID: "98af866c-3b91-4a5a-9c15-681572dbd5de"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.904343 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "98af866c-3b91-4a5a-9c15-681572dbd5de" (UID: "98af866c-3b91-4a5a-9c15-681572dbd5de"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.938366 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.938393 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.938404 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.938412 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:54 crc kubenswrapper[4858]: I0202 17:31:54.938419 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98af866c-3b91-4a5a-9c15-681572dbd5de-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 17:31:55 crc kubenswrapper[4858]: I0202 17:31:55.052714 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-749df8c57d-rd7dc"] Feb 02 17:31:55 crc kubenswrapper[4858]: I0202 17:31:55.091190 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5966976bc8-gg2wx"] Feb 02 17:31:55 crc kubenswrapper[4858]: I0202 
17:31:55.111733 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-786777f949-6vglz"] Feb 02 17:31:55 crc kubenswrapper[4858]: W0202 17:31:55.136468 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf98781ea_94a7_4f21_975e_4a8d48fad122.slice/crio-09a0f28cff511802cd5eed955ec5bcbde7f110a1241b90fc7f4dfc6aa73b78c6 WatchSource:0}: Error finding container 09a0f28cff511802cd5eed955ec5bcbde7f110a1241b90fc7f4dfc6aa73b78c6: Status 404 returned error can't find the container with id 09a0f28cff511802cd5eed955ec5bcbde7f110a1241b90fc7f4dfc6aa73b78c6 Feb 02 17:31:55 crc kubenswrapper[4858]: I0202 17:31:55.242077 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-nz69z" event={"ID":"98af866c-3b91-4a5a-9c15-681572dbd5de","Type":"ContainerDied","Data":"6c349d35d215322cefc39d48346b96e3b80697ba6fb94044062b1bdf932ae4a1"} Feb 02 17:31:55 crc kubenswrapper[4858]: I0202 17:31:55.242123 4858 scope.go:117] "RemoveContainer" containerID="1a019deb67b1efd2ff52c92b521a30d3f0f95f9506d6b3031aed77c71d1069c8" Feb 02 17:31:55 crc kubenswrapper[4858]: I0202 17:31:55.242255 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-nz69z" Feb 02 17:31:55 crc kubenswrapper[4858]: I0202 17:31:55.261256 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5877769c8-jgqfs" event={"ID":"72c71bde-c7f7-4e51-955d-e9a808664d2a","Type":"ContainerStarted","Data":"f11b6accc3bda352b71cd450a066557c0c3998983364e31d72eeee29df6e77d4"} Feb 02 17:31:55 crc kubenswrapper[4858]: I0202 17:31:55.261294 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5877769c8-jgqfs" event={"ID":"72c71bde-c7f7-4e51-955d-e9a808664d2a","Type":"ContainerStarted","Data":"85a87db9ad2abb32a6ad3246910c58b499e468c4fb1bf4eb5959d93e2dd34a90"} Feb 02 17:31:55 crc kubenswrapper[4858]: I0202 17:31:55.263727 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-749df8c57d-rd7dc" event={"ID":"08f67234-d648-4127-98d7-fcf00df7e1d3","Type":"ContainerStarted","Data":"7d7871dfb9469bcfb306bb5f0e34f211ecc6eed365e8a433faaca2e91d67b25c"} Feb 02 17:31:55 crc kubenswrapper[4858]: I0202 17:31:55.269652 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6fb4977965-lqqjm" event={"ID":"e92af156-c3ae-4bdc-bf59-b07c51dbaef6","Type":"ContainerStarted","Data":"ffa6aa6fc2521143eb0d8eabd1c5c42c53b1d98f72b1e50ed6e7cdbb81937eb9"} Feb 02 17:31:55 crc kubenswrapper[4858]: I0202 17:31:55.269695 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6fb4977965-lqqjm" event={"ID":"e92af156-c3ae-4bdc-bf59-b07c51dbaef6","Type":"ContainerStarted","Data":"dc4b27b85107271e242168a2df30b42592c78f9deeeb1791320c239045752239"} Feb 02 17:31:55 crc kubenswrapper[4858]: I0202 17:31:55.270679 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:31:55 crc kubenswrapper[4858]: I0202 17:31:55.282635 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-786777f949-6vglz" event={"ID":"f98781ea-94a7-4f21-975e-4a8d48fad122","Type":"ContainerStarted","Data":"09a0f28cff511802cd5eed955ec5bcbde7f110a1241b90fc7f4dfc6aa73b78c6"} Feb 02 17:31:55 crc kubenswrapper[4858]: I0202 17:31:55.286416 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-55f844cf75-nz69z"] Feb 02 17:31:55 crc kubenswrapper[4858]: I0202 17:31:55.287281 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" event={"ID":"f82a4143-d161-4bab-ba3a-bc426cfe53b9","Type":"ContainerStarted","Data":"4856a30a14a00e3573cf2746cede5b443a0763c9768cecf59d9d1dbc975df9a8"} Feb 02 17:31:55 crc kubenswrapper[4858]: I0202 17:31:55.293836 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-nz69z"] Feb 02 17:31:55 crc kubenswrapper[4858]: I0202 17:31:55.309806 4858 scope.go:117] "RemoveContainer" containerID="6c925973ee6e3ea76ddad42ea5bf6d98dcf5fe4cbd849cf7cb3dc4c05ccdd051" Feb 02 17:31:55 crc kubenswrapper[4858]: I0202 17:31:55.345461 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6fb4977965-lqqjm" podStartSLOduration=2.345441832 podStartE2EDuration="2.345441832s" podCreationTimestamp="2026-02-02 17:31:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:31:55.313724255 +0000 UTC m=+1016.466139530" watchObservedRunningTime="2026-02-02 17:31:55.345441832 +0000 UTC m=+1016.497857107" Feb 02 17:31:55 crc kubenswrapper[4858]: I0202 17:31:55.361230 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75f97b655d-lv8wc"] Feb 02 17:31:55 crc kubenswrapper[4858]: I0202 17:31:55.383238 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-zst5s"] Feb 02 17:31:55 crc kubenswrapper[4858]: I0202 17:31:55.423682 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8b9496955-6bsmq"] Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.297617 4858 generic.go:334] "Generic (PLEG): container finished" podID="60620bf9-46f0-4b74-b019-a24815a64e3d" containerID="b3801a8076b2612ad9fe1e530c14b9f14ccc87591afa9532e5df12f6a9a5ab88" exitCode=0 Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.298454 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-zst5s" event={"ID":"60620bf9-46f0-4b74-b019-a24815a64e3d","Type":"ContainerDied","Data":"b3801a8076b2612ad9fe1e530c14b9f14ccc87591afa9532e5df12f6a9a5ab88"} Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.299450 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-zst5s" event={"ID":"60620bf9-46f0-4b74-b019-a24815a64e3d","Type":"ContainerStarted","Data":"f40eb0679670668dbe95928d8f4e28aeafc03d31a6e886c90fb54fb49808927d"} Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.303138 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75f97b655d-lv8wc" event={"ID":"de5f94f8-30f4-4e19-8195-eb6a5b281de9","Type":"ContainerStarted","Data":"540513034c786c00b6429acad3e30eb43b0270445fb7b440a654639682246522"} Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.303183 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75f97b655d-lv8wc" Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.303226 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75f97b655d-lv8wc" event={"ID":"de5f94f8-30f4-4e19-8195-eb6a5b281de9","Type":"ContainerStarted","Data":"9cdf417ee38737aebb2ded473bfa0470de4201c406e99c606db9f406d2f66242"} Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.303241 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75f97b655d-lv8wc" event={"ID":"de5f94f8-30f4-4e19-8195-eb6a5b281de9","Type":"ContainerStarted","Data":"f344e047124d4ad7d2073e3a6242a0ac5130802186f0c74b2ebd1bd621f66e28"} Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.303258 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75f97b655d-lv8wc" Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.308621 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8b9496955-6bsmq" event={"ID":"9cd8cddc-99bb-4e60-85e5-07d6090cfd49","Type":"ContainerStarted","Data":"7d785defb0cb2e83bb115d46e98ff69f8d4244996e77a6db21fd28f3fe282aef"} Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.318621 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5877769c8-jgqfs" event={"ID":"72c71bde-c7f7-4e51-955d-e9a808664d2a","Type":"ContainerStarted","Data":"21da8d326ef04a2cef068e5d8c2a31a29d7d76ead36bd218d9d80d6f6a1691a7"} Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.318669 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.318702 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.350594 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-75f97b655d-lv8wc" podStartSLOduration=3.350575885 podStartE2EDuration="3.350575885s" podCreationTimestamp="2026-02-02 17:31:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:31:56.340048904 +0000 UTC m=+1017.492464199" watchObservedRunningTime="2026-02-02 17:31:56.350575885 +0000 UTC m=+1017.502991150" Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.369599 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5877769c8-jgqfs" podStartSLOduration=4.369579908 podStartE2EDuration="4.369579908s" podCreationTimestamp="2026-02-02 17:31:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:31:56.359207092 +0000 UTC m=+1017.511622377" watchObservedRunningTime="2026-02-02 17:31:56.369579908 +0000 UTC m=+1017.521995173" Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.418251 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98af866c-3b91-4a5a-9c15-681572dbd5de" path="/var/lib/kubelet/pods/98af866c-3b91-4a5a-9c15-681572dbd5de/volumes" Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.551804 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-b4fd7664d-fqkmq"] Feb 02 17:31:56 crc kubenswrapper[4858]: E0202 17:31:56.552229 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98af866c-3b91-4a5a-9c15-681572dbd5de" containerName="dnsmasq-dns" Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.552256 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="98af866c-3b91-4a5a-9c15-681572dbd5de" containerName="dnsmasq-dns" Feb 02 17:31:56 crc kubenswrapper[4858]: E0202 17:31:56.552269 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98af866c-3b91-4a5a-9c15-681572dbd5de" containerName="init" Feb 02 17:31:56 crc 
kubenswrapper[4858]: I0202 17:31:56.552275 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="98af866c-3b91-4a5a-9c15-681572dbd5de" containerName="init"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.552430 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="98af866c-3b91-4a5a-9c15-681572dbd5de" containerName="dnsmasq-dns"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.553365 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.573840 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/113e6fbe-f0ce-497b-8a16-fb8bc217b584-logs\") pod \"placement-b4fd7664d-fqkmq\" (UID: \"113e6fbe-f0ce-497b-8a16-fb8bc217b584\") " pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.573932 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/113e6fbe-f0ce-497b-8a16-fb8bc217b584-public-tls-certs\") pod \"placement-b4fd7664d-fqkmq\" (UID: \"113e6fbe-f0ce-497b-8a16-fb8bc217b584\") " pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.574062 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/113e6fbe-f0ce-497b-8a16-fb8bc217b584-internal-tls-certs\") pod \"placement-b4fd7664d-fqkmq\" (UID: \"113e6fbe-f0ce-497b-8a16-fb8bc217b584\") " pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.574116 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113e6fbe-f0ce-497b-8a16-fb8bc217b584-combined-ca-bundle\") pod \"placement-b4fd7664d-fqkmq\" (UID: \"113e6fbe-f0ce-497b-8a16-fb8bc217b584\") " pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.574240 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvbmw\" (UniqueName: \"kubernetes.io/projected/113e6fbe-f0ce-497b-8a16-fb8bc217b584-kube-api-access-tvbmw\") pod \"placement-b4fd7664d-fqkmq\" (UID: \"113e6fbe-f0ce-497b-8a16-fb8bc217b584\") " pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.574328 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/113e6fbe-f0ce-497b-8a16-fb8bc217b584-scripts\") pod \"placement-b4fd7664d-fqkmq\" (UID: \"113e6fbe-f0ce-497b-8a16-fb8bc217b584\") " pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.574379 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113e6fbe-f0ce-497b-8a16-fb8bc217b584-config-data\") pod \"placement-b4fd7664d-fqkmq\" (UID: \"113e6fbe-f0ce-497b-8a16-fb8bc217b584\") " pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.578247 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b4fd7664d-fqkmq"]
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.679492 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/113e6fbe-f0ce-497b-8a16-fb8bc217b584-logs\") pod \"placement-b4fd7664d-fqkmq\" (UID: \"113e6fbe-f0ce-497b-8a16-fb8bc217b584\") " pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.679890 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/113e6fbe-f0ce-497b-8a16-fb8bc217b584-logs\") pod \"placement-b4fd7664d-fqkmq\" (UID: \"113e6fbe-f0ce-497b-8a16-fb8bc217b584\") " pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.679900 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/113e6fbe-f0ce-497b-8a16-fb8bc217b584-public-tls-certs\") pod \"placement-b4fd7664d-fqkmq\" (UID: \"113e6fbe-f0ce-497b-8a16-fb8bc217b584\") " pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.680071 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/113e6fbe-f0ce-497b-8a16-fb8bc217b584-internal-tls-certs\") pod \"placement-b4fd7664d-fqkmq\" (UID: \"113e6fbe-f0ce-497b-8a16-fb8bc217b584\") " pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.680104 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113e6fbe-f0ce-497b-8a16-fb8bc217b584-combined-ca-bundle\") pod \"placement-b4fd7664d-fqkmq\" (UID: \"113e6fbe-f0ce-497b-8a16-fb8bc217b584\") " pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.680210 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvbmw\" (UniqueName: \"kubernetes.io/projected/113e6fbe-f0ce-497b-8a16-fb8bc217b584-kube-api-access-tvbmw\") pod \"placement-b4fd7664d-fqkmq\" (UID: \"113e6fbe-f0ce-497b-8a16-fb8bc217b584\") " pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.680250 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/113e6fbe-f0ce-497b-8a16-fb8bc217b584-scripts\") pod \"placement-b4fd7664d-fqkmq\" (UID: \"113e6fbe-f0ce-497b-8a16-fb8bc217b584\") " pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.680295 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113e6fbe-f0ce-497b-8a16-fb8bc217b584-config-data\") pod \"placement-b4fd7664d-fqkmq\" (UID: \"113e6fbe-f0ce-497b-8a16-fb8bc217b584\") " pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.685935 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/113e6fbe-f0ce-497b-8a16-fb8bc217b584-public-tls-certs\") pod \"placement-b4fd7664d-fqkmq\" (UID: \"113e6fbe-f0ce-497b-8a16-fb8bc217b584\") " pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.690824 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/113e6fbe-f0ce-497b-8a16-fb8bc217b584-config-data\") pod \"placement-b4fd7664d-fqkmq\" (UID: \"113e6fbe-f0ce-497b-8a16-fb8bc217b584\") " pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.692233 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113e6fbe-f0ce-497b-8a16-fb8bc217b584-combined-ca-bundle\") pod \"placement-b4fd7664d-fqkmq\" (UID: \"113e6fbe-f0ce-497b-8a16-fb8bc217b584\") " pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.696395 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/113e6fbe-f0ce-497b-8a16-fb8bc217b584-scripts\") pod \"placement-b4fd7664d-fqkmq\" (UID: \"113e6fbe-f0ce-497b-8a16-fb8bc217b584\") " pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.696806 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/113e6fbe-f0ce-497b-8a16-fb8bc217b584-internal-tls-certs\") pod \"placement-b4fd7664d-fqkmq\" (UID: \"113e6fbe-f0ce-497b-8a16-fb8bc217b584\") " pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.703562 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvbmw\" (UniqueName: \"kubernetes.io/projected/113e6fbe-f0ce-497b-8a16-fb8bc217b584-kube-api-access-tvbmw\") pod \"placement-b4fd7664d-fqkmq\" (UID: \"113e6fbe-f0ce-497b-8a16-fb8bc217b584\") " pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.913457 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b4fd7664d-fqkmq"
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.937184 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-59c6cb6f96-ss676"]
Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.938684 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59c6cb6f96-ss676"
Need to start a new one" pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.941651 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.946072 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.960904 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59c6cb6f96-ss676"] Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.987340 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dbf9cae-d42c-47ae-b117-3fd56628b72f-combined-ca-bundle\") pod \"barbican-api-59c6cb6f96-ss676\" (UID: \"8dbf9cae-d42c-47ae-b117-3fd56628b72f\") " pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.987472 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8dbf9cae-d42c-47ae-b117-3fd56628b72f-config-data-custom\") pod \"barbican-api-59c6cb6f96-ss676\" (UID: \"8dbf9cae-d42c-47ae-b117-3fd56628b72f\") " pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.987501 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dbf9cae-d42c-47ae-b117-3fd56628b72f-config-data\") pod \"barbican-api-59c6cb6f96-ss676\" (UID: \"8dbf9cae-d42c-47ae-b117-3fd56628b72f\") " pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.987526 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dbf9cae-d42c-47ae-b117-3fd56628b72f-public-tls-certs\") pod \"barbican-api-59c6cb6f96-ss676\" (UID: \"8dbf9cae-d42c-47ae-b117-3fd56628b72f\") " pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.987610 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dbf9cae-d42c-47ae-b117-3fd56628b72f-logs\") pod \"barbican-api-59c6cb6f96-ss676\" (UID: \"8dbf9cae-d42c-47ae-b117-3fd56628b72f\") " pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.987633 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dbf9cae-d42c-47ae-b117-3fd56628b72f-internal-tls-certs\") pod \"barbican-api-59c6cb6f96-ss676\" (UID: \"8dbf9cae-d42c-47ae-b117-3fd56628b72f\") " pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:56 crc kubenswrapper[4858]: I0202 17:31:56.987672 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjwhs\" (UniqueName: \"kubernetes.io/projected/8dbf9cae-d42c-47ae-b117-3fd56628b72f-kube-api-access-sjwhs\") pod \"barbican-api-59c6cb6f96-ss676\" (UID: \"8dbf9cae-d42c-47ae-b117-3fd56628b72f\") " pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.089169 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dbf9cae-d42c-47ae-b117-3fd56628b72f-logs\") pod \"barbican-api-59c6cb6f96-ss676\" (UID: \"8dbf9cae-d42c-47ae-b117-3fd56628b72f\") " pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.089416 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dbf9cae-d42c-47ae-b117-3fd56628b72f-internal-tls-certs\") pod \"barbican-api-59c6cb6f96-ss676\" (UID: \"8dbf9cae-d42c-47ae-b117-3fd56628b72f\") " pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.089448 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjwhs\" (UniqueName: \"kubernetes.io/projected/8dbf9cae-d42c-47ae-b117-3fd56628b72f-kube-api-access-sjwhs\") pod \"barbican-api-59c6cb6f96-ss676\" (UID: \"8dbf9cae-d42c-47ae-b117-3fd56628b72f\") " pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.089502 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dbf9cae-d42c-47ae-b117-3fd56628b72f-combined-ca-bundle\") pod \"barbican-api-59c6cb6f96-ss676\" (UID: \"8dbf9cae-d42c-47ae-b117-3fd56628b72f\") " pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.089565 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8dbf9cae-d42c-47ae-b117-3fd56628b72f-config-data-custom\") pod \"barbican-api-59c6cb6f96-ss676\" (UID: \"8dbf9cae-d42c-47ae-b117-3fd56628b72f\") " pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.089576 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dbf9cae-d42c-47ae-b117-3fd56628b72f-logs\") pod \"barbican-api-59c6cb6f96-ss676\" (UID: \"8dbf9cae-d42c-47ae-b117-3fd56628b72f\") " pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.089582 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dbf9cae-d42c-47ae-b117-3fd56628b72f-config-data\") pod \"barbican-api-59c6cb6f96-ss676\" (UID: \"8dbf9cae-d42c-47ae-b117-3fd56628b72f\") " pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.089639 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dbf9cae-d42c-47ae-b117-3fd56628b72f-public-tls-certs\") pod \"barbican-api-59c6cb6f96-ss676\" (UID: \"8dbf9cae-d42c-47ae-b117-3fd56628b72f\") " pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.092677 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dbf9cae-d42c-47ae-b117-3fd56628b72f-internal-tls-certs\") pod \"barbican-api-59c6cb6f96-ss676\" (UID: \"8dbf9cae-d42c-47ae-b117-3fd56628b72f\") " pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.093470 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8dbf9cae-d42c-47ae-b117-3fd56628b72f-combined-ca-bundle\") pod \"barbican-api-59c6cb6f96-ss676\" (UID: \"8dbf9cae-d42c-47ae-b117-3fd56628b72f\") " pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.095007 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dbf9cae-d42c-47ae-b117-3fd56628b72f-public-tls-certs\") pod \"barbican-api-59c6cb6f96-ss676\" (UID: \"8dbf9cae-d42c-47ae-b117-3fd56628b72f\") " pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.095560 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dbf9cae-d42c-47ae-b117-3fd56628b72f-config-data\") pod \"barbican-api-59c6cb6f96-ss676\" (UID: \"8dbf9cae-d42c-47ae-b117-3fd56628b72f\") " pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.110615 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjwhs\" (UniqueName: \"kubernetes.io/projected/8dbf9cae-d42c-47ae-b117-3fd56628b72f-kube-api-access-sjwhs\") pod \"barbican-api-59c6cb6f96-ss676\" (UID: \"8dbf9cae-d42c-47ae-b117-3fd56628b72f\") " pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.110816 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8dbf9cae-d42c-47ae-b117-3fd56628b72f-config-data-custom\") pod \"barbican-api-59c6cb6f96-ss676\" (UID: \"8dbf9cae-d42c-47ae-b117-3fd56628b72f\") " pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.245693 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.245764 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.245783 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.245795 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.277787 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.293550 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.294902 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.350582 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6xt8q" event={"ID":"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf","Type":"ContainerStarted","Data":"ad3d10a4a57038497de79a193804b1dd2faaed6dba90621eadb67952f0f1019a"} Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.382260 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-6xt8q" podStartSLOduration=3.186422197 podStartE2EDuration="47.382245788s" podCreationTimestamp="2026-02-02 17:31:10 +0000 UTC" firstStartedPulling="2026-02-02 17:31:11.759505336 +0000 UTC m=+972.911920601" lastFinishedPulling="2026-02-02 17:31:55.955328927 +0000 UTC m=+1017.107744192" observedRunningTime="2026-02-02 17:31:57.37846123 +0000 UTC m=+1018.530876515" watchObservedRunningTime="2026-02-02 17:31:57.382245788 +0000 UTC m=+1018.534661053" Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.808591 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:31:57 crc kubenswrapper[4858]: I0202 17:31:57.808964 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:31:58 crc kubenswrapper[4858]: I0202 17:31:58.583752 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b4fd7664d-fqkmq"] Feb 02 17:31:58 crc kubenswrapper[4858]: W0202 17:31:58.591723 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod113e6fbe_f0ce_497b_8a16_fb8bc217b584.slice/crio-e5e2b9f954e7ea005bb98d79ee7b8f1779dcacf1c1df26ac3f12b9ad84b0cb88 WatchSource:0}: Error finding container e5e2b9f954e7ea005bb98d79ee7b8f1779dcacf1c1df26ac3f12b9ad84b0cb88: Status 404 returned error can't find the container with id e5e2b9f954e7ea005bb98d79ee7b8f1779dcacf1c1df26ac3f12b9ad84b0cb88 Feb 02 17:31:58 crc kubenswrapper[4858]: I0202 17:31:58.714948 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59c6cb6f96-ss676"] Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.424238 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c6cb6f96-ss676" event={"ID":"8dbf9cae-d42c-47ae-b117-3fd56628b72f","Type":"ContainerStarted","Data":"0d49be850652cd709be978cadde772abcaa151fdc5b19296f31fc93ee99c4041"} Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.424595 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c6cb6f96-ss676" 
event={"ID":"8dbf9cae-d42c-47ae-b117-3fd56628b72f","Type":"ContainerStarted","Data":"a949e75cac3f3ef40c0a134cbcc8e294641c52e8aed1cc32a3ba757cc9ae1ee0"} Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.424614 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c6cb6f96-ss676" event={"ID":"8dbf9cae-d42c-47ae-b117-3fd56628b72f","Type":"ContainerStarted","Data":"2c1453d1f5e4a1a159cc6f009c9c21c520e5760bc9508262f14261ab82b99c07"} Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.426127 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.426180 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.429834 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" event={"ID":"f82a4143-d161-4bab-ba3a-bc426cfe53b9","Type":"ContainerStarted","Data":"e8dc681d14696363361bca939cdaa80210833cd991353d34830952e0e7343cfd"} Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.429877 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" event={"ID":"f82a4143-d161-4bab-ba3a-bc426cfe53b9","Type":"ContainerStarted","Data":"d354ad67c3b81d0d823086864b4aed743814d4c4a5355d8b9060b57e865d3a94"} Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.437491 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8b9496955-6bsmq" event={"ID":"9cd8cddc-99bb-4e60-85e5-07d6090cfd49","Type":"ContainerStarted","Data":"3630d6e902531f83289acaa8ef31823663691af8d76dc93163f1252d121c7cd8"} Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.437552 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8b9496955-6bsmq" event={"ID":"9cd8cddc-99bb-4e60-85e5-07d6090cfd49","Type":"ContainerStarted","Data":"50bc372f099873e1083a707f393191afe6467724ff475293998300c2311772d2"} Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.440075 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-749df8c57d-rd7dc" event={"ID":"08f67234-d648-4127-98d7-fcf00df7e1d3","Type":"ContainerStarted","Data":"25a0d9f1550bb43f0f0edba4bfa6b26d4dfdabfab7c20c60955bc26fc7c4813a"} Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.440119 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-749df8c57d-rd7dc" event={"ID":"08f67234-d648-4127-98d7-fcf00df7e1d3","Type":"ContainerStarted","Data":"503b4da13f0d934d1467fbf1c96c7f7aa4de31e36ef17fc408e6b0d87bd55cc0"} Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.442310 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-zst5s" event={"ID":"60620bf9-46f0-4b74-b019-a24815a64e3d","Type":"ContainerStarted","Data":"e7f1dca080c1c44b8f46e055cfcf0cf379f9192cfa71726e6d42866e3e3398f7"} Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.442835 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.446344 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-786777f949-6vglz" 
event={"ID":"f98781ea-94a7-4f21-975e-4a8d48fad122","Type":"ContainerStarted","Data":"1e3575ed04b22b4e0374f08cedb28cf00e6eeca424bf3e96ac9f6b3af36a7a04"} Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.446403 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-786777f949-6vglz" event={"ID":"f98781ea-94a7-4f21-975e-4a8d48fad122","Type":"ContainerStarted","Data":"984b6c054fb790a1b9581084f781ecaece762c78af5fb02b0c9c46a0b92b75af"} Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.463176 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-59c6cb6f96-ss676" podStartSLOduration=3.463155876 podStartE2EDuration="3.463155876s" podCreationTimestamp="2026-02-02 17:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:31:59.450988128 +0000 UTC m=+1020.603403403" watchObservedRunningTime="2026-02-02 17:31:59.463155876 +0000 UTC m=+1020.615571141" Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.467845 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b4fd7664d-fqkmq" event={"ID":"113e6fbe-f0ce-497b-8a16-fb8bc217b584","Type":"ContainerStarted","Data":"56101885eb8b7df7c662b6c45c5d26653e6790cbbee6000ff7c46407a7b348e0"} Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.467898 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b4fd7664d-fqkmq" event={"ID":"113e6fbe-f0ce-497b-8a16-fb8bc217b584","Type":"ContainerStarted","Data":"f521b1b670070370753f6ebe6564cf5036f2ddf874d82a62a92db779a6f75d08"} Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.467911 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b4fd7664d-fqkmq" event={"ID":"113e6fbe-f0ce-497b-8a16-fb8bc217b584","Type":"ContainerStarted","Data":"e5e2b9f954e7ea005bb98d79ee7b8f1779dcacf1c1df26ac3f12b9ad84b0cb88"} Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.468255 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-b4fd7664d-fqkmq" Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.468368 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-b4fd7664d-fqkmq" Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.486853 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-786777f949-6vglz" podStartSLOduration=3.635990625 podStartE2EDuration="6.486832102s" podCreationTimestamp="2026-02-02 17:31:53 +0000 UTC" firstStartedPulling="2026-02-02 17:31:55.138696582 +0000 UTC m=+1016.291111847" lastFinishedPulling="2026-02-02 17:31:57.989538059 +0000 UTC m=+1019.141953324" observedRunningTime="2026-02-02 17:31:59.477348131 +0000 UTC m=+1020.629763416" watchObservedRunningTime="2026-02-02 17:31:59.486832102 +0000 UTC m=+1020.639247357" Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.498372 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" podStartSLOduration=3.681910159 podStartE2EDuration="6.498352312s" podCreationTimestamp="2026-02-02 17:31:53 +0000 UTC" firstStartedPulling="2026-02-02 17:31:55.144819627 +0000 UTC m=+1016.297234892" lastFinishedPulling="2026-02-02 17:31:57.96126178 +0000 UTC m=+1019.113677045" observedRunningTime="2026-02-02 17:31:59.497528738 +0000 UTC m=+1020.649944023" watchObservedRunningTime="2026-02-02 
Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.538519 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-749df8c57d-rd7dc" podStartSLOduration=3.637505049 podStartE2EDuration="6.538496179s" podCreationTimestamp="2026-02-02 17:31:53 +0000 UTC" firstStartedPulling="2026-02-02 17:31:55.106166012 +0000 UTC m=+1016.258581277" lastFinishedPulling="2026-02-02 17:31:58.007157142 +0000 UTC m=+1019.159572407" observedRunningTime="2026-02-02 17:31:59.526395093 +0000 UTC m=+1020.678810348" watchObservedRunningTime="2026-02-02 17:31:59.538496179 +0000 UTC m=+1020.690911444"
Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.594009 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-8b9496955-6bsmq" podStartSLOduration=3.996397039 podStartE2EDuration="6.593987736s" podCreationTimestamp="2026-02-02 17:31:53 +0000 UTC" firstStartedPulling="2026-02-02 17:31:55.409619747 +0000 UTC m=+1016.562035012" lastFinishedPulling="2026-02-02 17:31:58.007210434 +0000 UTC m=+1019.159625709" observedRunningTime="2026-02-02 17:31:59.574367275 +0000 UTC m=+1020.726782530" watchObservedRunningTime="2026-02-02 17:31:59.593987736 +0000 UTC m=+1020.746403021"
Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.594278 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5966976bc8-gg2wx"]
Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.625263 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-zst5s" podStartSLOduration=6.625240079 podStartE2EDuration="6.625240079s" podCreationTimestamp="2026-02-02 17:31:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:31:59.595085817 +0000 UTC m=+1020.747501082" watchObservedRunningTime="2026-02-02 17:31:59.625240079 +0000 UTC m=+1020.777655344"
Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.639685 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-786777f949-6vglz"]
Feb 02 17:31:59 crc kubenswrapper[4858]: I0202 17:31:59.649860 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-b4fd7664d-fqkmq" podStartSLOduration=3.649830612 podStartE2EDuration="3.649830612s" podCreationTimestamp="2026-02-02 17:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:31:59.620816813 +0000 UTC m=+1020.773232088" watchObservedRunningTime="2026-02-02 17:31:59.649830612 +0000 UTC m=+1020.802245887"
Feb 02 17:32:00 crc kubenswrapper[4858]: I0202 17:32:00.225521 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-857c87669d-c45h7" podUID="24d5a090-abc7-4832-b6c6-2e36edf7d82e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused"
Feb 02 17:32:00 crc kubenswrapper[4858]: I0202 17:32:00.249271 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 02 17:32:00 crc kubenswrapper[4858]: I0202 17:32:00.249338 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 02 17:32:00 crc kubenswrapper[4858]: I0202 17:32:00.323112 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 02 17:32:00 crc kubenswrapper[4858]: I0202 17:32:00.330219 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 02 17:32:00 crc kubenswrapper[4858]: I0202 17:32:00.330310 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 17:32:00 crc kubenswrapper[4858]: I0202 17:32:00.330937 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 02 17:32:00 crc kubenswrapper[4858]: I0202 17:32:00.351069 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68f4b57796-rhdnw" podUID="4a208969-437b-449b-ba53-89364175a52a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Feb 02 17:32:00 crc kubenswrapper[4858]: I0202 17:32:00.365000 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 02 17:32:00 crc kubenswrapper[4858]: I0202 17:32:00.495532 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 02 17:32:00 crc kubenswrapper[4858]: I0202 17:32:00.495608 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 02 17:32:01 crc kubenswrapper[4858]: I0202 17:32:01.496270 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" podUID="f82a4143-d161-4bab-ba3a-bc426cfe53b9" containerName="barbican-keystone-listener-log" containerID="cri-o://d354ad67c3b81d0d823086864b4aed743814d4c4a5355d8b9060b57e865d3a94" gracePeriod=30
Feb 02 17:32:01 crc kubenswrapper[4858]: I0202 17:32:01.496552 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" podUID="f82a4143-d161-4bab-ba3a-bc426cfe53b9" containerName="barbican-keystone-listener" containerID="cri-o://e8dc681d14696363361bca939cdaa80210833cd991353d34830952e0e7343cfd" gracePeriod=30
Feb 02 17:32:01 crc kubenswrapper[4858]: I0202 17:32:01.496553 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-786777f949-6vglz" podUID="f98781ea-94a7-4f21-975e-4a8d48fad122" containerName="barbican-worker-log" containerID="cri-o://984b6c054fb790a1b9581084f781ecaece762c78af5fb02b0c9c46a0b92b75af" gracePeriod=30
Feb 02 17:32:01 crc kubenswrapper[4858]: I0202 17:32:01.496578 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-786777f949-6vglz" podUID="f98781ea-94a7-4f21-975e-4a8d48fad122" containerName="barbican-worker" containerID="cri-o://1e3575ed04b22b4e0374f08cedb28cf00e6eeca424bf3e96ac9f6b3af36a7a04" gracePeriod=30
Feb 02 17:32:02 crc kubenswrapper[4858]: I0202 17:32:02.520715 4858 generic.go:334] "Generic (PLEG): container finished" podID="f82a4143-d161-4bab-ba3a-bc426cfe53b9" containerID="e8dc681d14696363361bca939cdaa80210833cd991353d34830952e0e7343cfd" exitCode=0
Feb 02 17:32:02 crc kubenswrapper[4858]: I0202 17:32:02.521028 4858 generic.go:334] "Generic (PLEG): container finished" podID="f82a4143-d161-4bab-ba3a-bc426cfe53b9" containerID="d354ad67c3b81d0d823086864b4aed743814d4c4a5355d8b9060b57e865d3a94" exitCode=143
Feb 02 17:32:02 crc kubenswrapper[4858]: I0202 17:32:02.520856 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" event={"ID":"f82a4143-d161-4bab-ba3a-bc426cfe53b9","Type":"ContainerDied","Data":"e8dc681d14696363361bca939cdaa80210833cd991353d34830952e0e7343cfd"}
Feb 02 17:32:02 crc kubenswrapper[4858]: I0202 17:32:02.521107 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" event={"ID":"f82a4143-d161-4bab-ba3a-bc426cfe53b9","Type":"ContainerDied","Data":"d354ad67c3b81d0d823086864b4aed743814d4c4a5355d8b9060b57e865d3a94"}
Feb 02 17:32:02 crc kubenswrapper[4858]: I0202 17:32:02.524991 4858 generic.go:334] "Generic (PLEG): container finished" podID="f98781ea-94a7-4f21-975e-4a8d48fad122" containerID="1e3575ed04b22b4e0374f08cedb28cf00e6eeca424bf3e96ac9f6b3af36a7a04" exitCode=0
Feb 02 17:32:02 crc kubenswrapper[4858]: I0202 17:32:02.525020 4858 generic.go:334] "Generic (PLEG): container finished" podID="f98781ea-94a7-4f21-975e-4a8d48fad122" containerID="984b6c054fb790a1b9581084f781ecaece762c78af5fb02b0c9c46a0b92b75af" exitCode=143
Feb 02 17:32:02 crc kubenswrapper[4858]: I0202 17:32:02.525066 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-786777f949-6vglz" event={"ID":"f98781ea-94a7-4f21-975e-4a8d48fad122","Type":"ContainerDied","Data":"1e3575ed04b22b4e0374f08cedb28cf00e6eeca424bf3e96ac9f6b3af36a7a04"}
Feb 02 17:32:02 crc kubenswrapper[4858]: I0202 17:32:02.525106 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-786777f949-6vglz" event={"ID":"f98781ea-94a7-4f21-975e-4a8d48fad122","Type":"ContainerDied","Data":"984b6c054fb790a1b9581084f781ecaece762c78af5fb02b0c9c46a0b92b75af"}
Feb 02 17:32:02 crc kubenswrapper[4858]: I0202 17:32:02.525080 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 17:32:02 crc kubenswrapper[4858]: I0202 17:32:02.525127 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 17:32:02 crc kubenswrapper[4858]: I0202 17:32:02.732758 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 02 17:32:02 crc kubenswrapper[4858]: I0202 17:32:02.736466 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 02 17:32:03 crc kubenswrapper[4858]: I0202 17:32:03.538951 4858 generic.go:334] "Generic (PLEG): container finished" podID="d5a9fadc-338f-44bb-8ebd-bc4fe01972bf" containerID="ad3d10a4a57038497de79a193804b1dd2faaed6dba90621eadb67952f0f1019a" exitCode=0
Feb 02 17:32:03 crc kubenswrapper[4858]: I0202 17:32:03.540014 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6xt8q" event={"ID":"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf","Type":"ContainerDied","Data":"ad3d10a4a57038497de79a193804b1dd2faaed6dba90621eadb67952f0f1019a"}
Feb 02 17:32:04 crc kubenswrapper[4858]: I0202 17:32:04.331804 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-zst5s"
Feb 02 17:32:04 crc kubenswrapper[4858]: I0202 17:32:04.442380 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-x2rtg"]
Feb 02 17:32:04 crc kubenswrapper[4858]: I0202 17:32:04.442615 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" podUID="a85b3eb5-0945-45b7-875b-e20c2c0e29f7" containerName="dnsmasq-dns" containerID="cri-o://9064ecb9ffc75e6be7cdfc0cb9f8e4662fcc85336316095459f8aa3519d0de96" gracePeriod=10
Feb 02 17:32:05 crc kubenswrapper[4858]: I0202 17:32:05.591118 4858 generic.go:334] "Generic (PLEG): container finished" podID="a85b3eb5-0945-45b7-875b-e20c2c0e29f7" containerID="9064ecb9ffc75e6be7cdfc0cb9f8e4662fcc85336316095459f8aa3519d0de96" exitCode=0
Feb 02 17:32:05 crc kubenswrapper[4858]: I0202 17:32:05.591234 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" event={"ID":"a85b3eb5-0945-45b7-875b-e20c2c0e29f7","Type":"ContainerDied","Data":"9064ecb9ffc75e6be7cdfc0cb9f8e4662fcc85336316095459f8aa3519d0de96"}
Feb 02 17:32:06 crc kubenswrapper[4858]: I0202 17:32:06.517768 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" podUID="a85b3eb5-0945-45b7-875b-e20c2c0e29f7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: connect: connection refused"
Feb 02 17:32:06 crc kubenswrapper[4858]: I0202 17:32:06.518086 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75f97b655d-lv8wc"
Feb 02 17:32:06 crc kubenswrapper[4858]: I0202 17:32:06.582619 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75f97b655d-lv8wc"
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.368754 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6xt8q"
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.458911 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-etc-machine-id\") pod \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") "
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.459365 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-combined-ca-bundle\") pod \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") "
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.459394 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-config-data\") pod \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") "
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.459454 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-scripts\") pod \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") "
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.459471 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjfgs\" (UniqueName: \"kubernetes.io/projected/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-kube-api-access-fjfgs\") pod \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") "
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.459546 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-db-sync-config-data\") pod \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\" (UID: \"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf\") "
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.461082 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d5a9fadc-338f-44bb-8ebd-bc4fe01972bf" (UID: "d5a9fadc-338f-44bb-8ebd-bc4fe01972bf"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.461425 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx"
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.474182 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d5a9fadc-338f-44bb-8ebd-bc4fe01972bf" (UID: "d5a9fadc-338f-44bb-8ebd-bc4fe01972bf"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.495936 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-kube-api-access-fjfgs" (OuterVolumeSpecName: "kube-api-access-fjfgs") pod "d5a9fadc-338f-44bb-8ebd-bc4fe01972bf" (UID: "d5a9fadc-338f-44bb-8ebd-bc4fe01972bf"). InnerVolumeSpecName "kube-api-access-fjfgs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.517307 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-scripts" (OuterVolumeSpecName: "scripts") pod "d5a9fadc-338f-44bb-8ebd-bc4fe01972bf" (UID: "d5a9fadc-338f-44bb-8ebd-bc4fe01972bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.560562 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f82a4143-d161-4bab-ba3a-bc426cfe53b9-config-data\") pod \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\" (UID: \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\") "
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.560831 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f82a4143-d161-4bab-ba3a-bc426cfe53b9-logs\") pod \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\" (UID: \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\") "
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.560855 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcb2z\" (UniqueName: \"kubernetes.io/projected/f82a4143-d161-4bab-ba3a-bc426cfe53b9-kube-api-access-bcb2z\") pod \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\" (UID: \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\") "
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.560874 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f82a4143-d161-4bab-ba3a-bc426cfe53b9-config-data-custom\") pod \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\" (UID: \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\") "
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.560893 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f82a4143-d161-4bab-ba3a-bc426cfe53b9-combined-ca-bundle\") pod \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\" (UID: \"f82a4143-d161-4bab-ba3a-bc426cfe53b9\") "
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.561298 4858 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.561317 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.561326 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.561336 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjfgs\" (UniqueName: \"kubernetes.io/projected/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-kube-api-access-fjfgs\") on node \"crc\" DevicePath \"\""
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.562571 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f82a4143-d161-4bab-ba3a-bc426cfe53b9-logs" (OuterVolumeSpecName: "logs") pod "f82a4143-d161-4bab-ba3a-bc426cfe53b9" (UID: "f82a4143-d161-4bab-ba3a-bc426cfe53b9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.574514 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f82a4143-d161-4bab-ba3a-bc426cfe53b9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f82a4143-d161-4bab-ba3a-bc426cfe53b9" (UID: "f82a4143-d161-4bab-ba3a-bc426cfe53b9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.578832 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5a9fadc-338f-44bb-8ebd-bc4fe01972bf" (UID: "d5a9fadc-338f-44bb-8ebd-bc4fe01972bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.579158 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f82a4143-d161-4bab-ba3a-bc426cfe53b9-kube-api-access-bcb2z" (OuterVolumeSpecName: "kube-api-access-bcb2z") pod "f82a4143-d161-4bab-ba3a-bc426cfe53b9" (UID: "f82a4143-d161-4bab-ba3a-bc426cfe53b9"). InnerVolumeSpecName "kube-api-access-bcb2z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.594179 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f82a4143-d161-4bab-ba3a-bc426cfe53b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f82a4143-d161-4bab-ba3a-bc426cfe53b9" (UID: "f82a4143-d161-4bab-ba3a-bc426cfe53b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.596450 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-config-data" (OuterVolumeSpecName: "config-data") pod "d5a9fadc-338f-44bb-8ebd-bc4fe01972bf" (UID: "d5a9fadc-338f-44bb-8ebd-bc4fe01972bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.612057 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f82a4143-d161-4bab-ba3a-bc426cfe53b9-config-data" (OuterVolumeSpecName: "config-data") pod "f82a4143-d161-4bab-ba3a-bc426cfe53b9" (UID: "f82a4143-d161-4bab-ba3a-bc426cfe53b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.650471 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx"
Need to start a new one" pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.651326 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5966976bc8-gg2wx" event={"ID":"f82a4143-d161-4bab-ba3a-bc426cfe53b9","Type":"ContainerDied","Data":"4856a30a14a00e3573cf2746cede5b443a0763c9768cecf59d9d1dbc975df9a8"} Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.651384 4858 scope.go:117] "RemoveContainer" containerID="e8dc681d14696363361bca939cdaa80210833cd991353d34830952e0e7343cfd" Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.665221 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6xt8q" event={"ID":"d5a9fadc-338f-44bb-8ebd-bc4fe01972bf","Type":"ContainerDied","Data":"daab15a97c73c6488f62e69272c2e17d3869233847bdb61d152cb0cb95a75b80"} Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.665271 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daab15a97c73c6488f62e69272c2e17d3869233847bdb61d152cb0cb95a75b80" Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.665337 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6xt8q" Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.669395 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f82a4143-d161-4bab-ba3a-bc426cfe53b9-logs\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.675343 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcb2z\" (UniqueName: \"kubernetes.io/projected/f82a4143-d161-4bab-ba3a-bc426cfe53b9-kube-api-access-bcb2z\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.675525 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f82a4143-d161-4bab-ba3a-bc426cfe53b9-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.675608 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f82a4143-d161-4bab-ba3a-bc426cfe53b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.675693 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f82a4143-d161-4bab-ba3a-bc426cfe53b9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.675792 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.675865 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.709508 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5966976bc8-gg2wx"] Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.740417 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-786777f949-6vglz" Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.743555 4858 scope.go:117] "RemoveContainer" containerID="d354ad67c3b81d0d823086864b4aed743814d4c4a5355d8b9060b57e865d3a94" Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.748629 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-5966976bc8-gg2wx"] Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.878900 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8q96\" (UniqueName: \"kubernetes.io/projected/f98781ea-94a7-4f21-975e-4a8d48fad122-kube-api-access-p8q96\") pod \"f98781ea-94a7-4f21-975e-4a8d48fad122\" (UID: \"f98781ea-94a7-4f21-975e-4a8d48fad122\") " Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.879050 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98781ea-94a7-4f21-975e-4a8d48fad122-combined-ca-bundle\") pod \"f98781ea-94a7-4f21-975e-4a8d48fad122\" (UID: \"f98781ea-94a7-4f21-975e-4a8d48fad122\") " Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.879136 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f98781ea-94a7-4f21-975e-4a8d48fad122-logs\") pod \"f98781ea-94a7-4f21-975e-4a8d48fad122\" (UID: \"f98781ea-94a7-4f21-975e-4a8d48fad122\") " Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.879202 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98781ea-94a7-4f21-975e-4a8d48fad122-config-data\") pod \"f98781ea-94a7-4f21-975e-4a8d48fad122\" (UID: \"f98781ea-94a7-4f21-975e-4a8d48fad122\") " Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.879229 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f98781ea-94a7-4f21-975e-4a8d48fad122-config-data-custom\") pod \"f98781ea-94a7-4f21-975e-4a8d48fad122\" (UID: \"f98781ea-94a7-4f21-975e-4a8d48fad122\") " Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.879312 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.880680 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f98781ea-94a7-4f21-975e-4a8d48fad122-logs" (OuterVolumeSpecName: "logs") pod "f98781ea-94a7-4f21-975e-4a8d48fad122" (UID: "f98781ea-94a7-4f21-975e-4a8d48fad122"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.916102 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98781ea-94a7-4f21-975e-4a8d48fad122-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f98781ea-94a7-4f21-975e-4a8d48fad122" (UID: "f98781ea-94a7-4f21-975e-4a8d48fad122"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.931521 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f98781ea-94a7-4f21-975e-4a8d48fad122-kube-api-access-p8q96" (OuterVolumeSpecName: "kube-api-access-p8q96") pod "f98781ea-94a7-4f21-975e-4a8d48fad122" (UID: "f98781ea-94a7-4f21-975e-4a8d48fad122"). InnerVolumeSpecName "kube-api-access-p8q96". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.980499 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnhgt\" (UniqueName: \"kubernetes.io/projected/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-kube-api-access-bnhgt\") pod \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.980552 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-ovsdbserver-sb\") pod \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.980615 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-dns-svc\") pod \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.980645 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-ovsdbserver-nb\") pod \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.980669 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-dns-swift-storage-0\") pod \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.980715 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-config\") pod \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\" (UID: \"a85b3eb5-0945-45b7-875b-e20c2c0e29f7\") " Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.981207 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8q96\" (UniqueName: \"kubernetes.io/projected/f98781ea-94a7-4f21-975e-4a8d48fad122-kube-api-access-p8q96\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.981226 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f98781ea-94a7-4f21-975e-4a8d48fad122-logs\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.981235 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f98781ea-94a7-4f21-975e-4a8d48fad122-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.984117 4858 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-kube-api-access-bnhgt" (OuterVolumeSpecName: "kube-api-access-bnhgt") pod "a85b3eb5-0945-45b7-875b-e20c2c0e29f7" (UID: "a85b3eb5-0945-45b7-875b-e20c2c0e29f7"). InnerVolumeSpecName "kube-api-access-bnhgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:32:07 crc kubenswrapper[4858]: I0202 17:32:07.992646 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98781ea-94a7-4f21-975e-4a8d48fad122-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f98781ea-94a7-4f21-975e-4a8d48fad122" (UID: "f98781ea-94a7-4f21-975e-4a8d48fad122"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:08 crc kubenswrapper[4858]: E0202 17:32:08.056441 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="20f4c2e0-6bac-4c5a-affd-48f2d8301111" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.062084 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f98781ea-94a7-4f21-975e-4a8d48fad122-config-data" (OuterVolumeSpecName: "config-data") pod "f98781ea-94a7-4f21-975e-4a8d48fad122" (UID: "f98781ea-94a7-4f21-975e-4a8d48fad122"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.082993 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f98781ea-94a7-4f21-975e-4a8d48fad122-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.083025 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f98781ea-94a7-4f21-975e-4a8d48fad122-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.083039 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnhgt\" (UniqueName: \"kubernetes.io/projected/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-kube-api-access-bnhgt\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.094884 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a85b3eb5-0945-45b7-875b-e20c2c0e29f7" (UID: "a85b3eb5-0945-45b7-875b-e20c2c0e29f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.104596 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-config" (OuterVolumeSpecName: "config") pod "a85b3eb5-0945-45b7-875b-e20c2c0e29f7" (UID: "a85b3eb5-0945-45b7-875b-e20c2c0e29f7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.113521 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a85b3eb5-0945-45b7-875b-e20c2c0e29f7" (UID: "a85b3eb5-0945-45b7-875b-e20c2c0e29f7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.118229 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a85b3eb5-0945-45b7-875b-e20c2c0e29f7" (UID: "a85b3eb5-0945-45b7-875b-e20c2c0e29f7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.121158 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a85b3eb5-0945-45b7-875b-e20c2c0e29f7" (UID: "a85b3eb5-0945-45b7-875b-e20c2c0e29f7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.185115 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.185148 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.185160 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.185169 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.185177 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a85b3eb5-0945-45b7-875b-e20c2c0e29f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.409849 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f82a4143-d161-4bab-ba3a-bc426cfe53b9" path="/var/lib/kubelet/pods/f82a4143-d161-4bab-ba3a-bc426cfe53b9/volumes" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.695279 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-786777f949-6vglz" event={"ID":"f98781ea-94a7-4f21-975e-4a8d48fad122","Type":"ContainerDied","Data":"09a0f28cff511802cd5eed955ec5bcbde7f110a1241b90fc7f4dfc6aa73b78c6"} Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.695332 4858 scope.go:117] "RemoveContainer" containerID="1e3575ed04b22b4e0374f08cedb28cf00e6eeca424bf3e96ac9f6b3af36a7a04" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.695477 4858 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-786777f949-6vglz" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.715924 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 17:32:08 crc kubenswrapper[4858]: E0202 17:32:08.716306 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f82a4143-d161-4bab-ba3a-bc426cfe53b9" containerName="barbican-keystone-listener-log" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.716319 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f82a4143-d161-4bab-ba3a-bc426cfe53b9" containerName="barbican-keystone-listener-log" Feb 02 17:32:08 crc kubenswrapper[4858]: E0202 17:32:08.716330 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a85b3eb5-0945-45b7-875b-e20c2c0e29f7" containerName="dnsmasq-dns" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.716336 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a85b3eb5-0945-45b7-875b-e20c2c0e29f7" containerName="dnsmasq-dns" Feb 02 17:32:08 crc kubenswrapper[4858]: E0202 17:32:08.716346 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f98781ea-94a7-4f21-975e-4a8d48fad122" containerName="barbican-worker-log" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.716352 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f98781ea-94a7-4f21-975e-4a8d48fad122" containerName="barbican-worker-log" Feb 02 17:32:08 crc kubenswrapper[4858]: E0202 17:32:08.716373 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f82a4143-d161-4bab-ba3a-bc426cfe53b9" containerName="barbican-keystone-listener" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.716378 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f82a4143-d161-4bab-ba3a-bc426cfe53b9" containerName="barbican-keystone-listener" Feb 02 17:32:08 crc kubenswrapper[4858]: E0202 17:32:08.716388 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a85b3eb5-0945-45b7-875b-e20c2c0e29f7" containerName="init" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.716393 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a85b3eb5-0945-45b7-875b-e20c2c0e29f7" containerName="init" Feb 02 17:32:08 crc kubenswrapper[4858]: E0202 17:32:08.716402 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f98781ea-94a7-4f21-975e-4a8d48fad122" containerName="barbican-worker" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.716409 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f98781ea-94a7-4f21-975e-4a8d48fad122" containerName="barbican-worker" Feb 02 17:32:08 crc kubenswrapper[4858]: E0202 17:32:08.716419 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5a9fadc-338f-44bb-8ebd-bc4fe01972bf" containerName="cinder-db-sync" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.716425 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5a9fadc-338f-44bb-8ebd-bc4fe01972bf" containerName="cinder-db-sync" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.716587 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f82a4143-d161-4bab-ba3a-bc426cfe53b9" containerName="barbican-keystone-listener" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.716598 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f98781ea-94a7-4f21-975e-4a8d48fad122" containerName="barbican-worker" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.716617 4858 
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.716617 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f98781ea-94a7-4f21-975e-4a8d48fad122" containerName="barbican-worker-log"
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.716626 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f82a4143-d161-4bab-ba3a-bc426cfe53b9" containerName="barbican-keystone-listener-log"
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.716645 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a85b3eb5-0945-45b7-875b-e20c2c0e29f7" containerName="dnsmasq-dns"
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.716659 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5a9fadc-338f-44bb-8ebd-bc4fe01972bf" containerName="cinder-db-sync"
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.717573 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.722762 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.722910 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.723019 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.723519 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-vxpsf"
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.734590 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.734927 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f4c2e0-6bac-4c5a-affd-48f2d8301111","Type":"ContainerStarted","Data":"f740f8acdcfb5956b56966e7bdcf5b795f07aad95bc399ab0f3a3f5fc78e4c5d"}
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.735146 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20f4c2e0-6bac-4c5a-affd-48f2d8301111" containerName="ceilometer-notification-agent" containerID="cri-o://9b71df5e3093339ed27494207b4e8407bc23ffb89e50cc0d982bb84e06fecfe1" gracePeriod=30
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.735454 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.735507 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20f4c2e0-6bac-4c5a-affd-48f2d8301111" containerName="proxy-httpd" containerID="cri-o://f740f8acdcfb5956b56966e7bdcf5b795f07aad95bc399ab0f3a3f5fc78e4c5d" gracePeriod=30
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.735572 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20f4c2e0-6bac-4c5a-affd-48f2d8301111" containerName="sg-core" containerID="cri-o://ef780359f3f63d18376f8ff91714675ffb9b0bfe8b28f2f8ebfce32c385ba9e2" gracePeriod=30
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.754165 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-786777f949-6vglz"]
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.768565 4858 scope.go:117] "RemoveContainer" containerID="984b6c054fb790a1b9581084f781ecaece762c78af5fb02b0c9c46a0b92b75af"
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.772250 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-786777f949-6vglz"]
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.783087 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg" event={"ID":"a85b3eb5-0945-45b7-875b-e20c2c0e29f7","Type":"ContainerDied","Data":"9d26e942381568751f95b01992ae9e1f51e5f64cc97829f654707063346dd694"}
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.783332 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-x2rtg"
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.795918 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-config-data\") pod \"cinder-scheduler-0\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " pod="openstack/cinder-scheduler-0"
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.796002 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " pod="openstack/cinder-scheduler-0"
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.796040 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " pod="openstack/cinder-scheduler-0"
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.796145 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxhdl\" (UniqueName: \"kubernetes.io/projected/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-kube-api-access-zxhdl\") pod \"cinder-scheduler-0\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " pod="openstack/cinder-scheduler-0"
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.796191 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " pod="openstack/cinder-scheduler-0"
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.796232 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-scripts\") pod \"cinder-scheduler-0\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " pod="openstack/cinder-scheduler-0"
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.833494 4858 scope.go:117] "RemoveContainer" containerID="9064ecb9ffc75e6be7cdfc0cb9f8e4662fcc85336316095459f8aa3519d0de96"
Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.851169 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-rk27f"]
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.887682 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-rk27f"] Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.889077 4858 scope.go:117] "RemoveContainer" containerID="564fe604ab337a5d5ab3f10a2ca26b080451c39a773bb145a019cb89c5ba723a" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.899907 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxhdl\" (UniqueName: \"kubernetes.io/projected/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-kube-api-access-zxhdl\") pod \"cinder-scheduler-0\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.900000 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.900039 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-rk27f\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.900064 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-scripts\") pod \"cinder-scheduler-0\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.900083 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-rk27f\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.900104 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-rk27f\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.900135 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-config-data\") pod \"cinder-scheduler-0\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.900162 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxq2r\" (UniqueName: \"kubernetes.io/projected/be2cff80-fb1c-4421-a892-a140ab4e7dec-kube-api-access-zxq2r\") pod \"dnsmasq-dns-5c9776ccc5-rk27f\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:32:08 crc 
kubenswrapper[4858]: I0202 17:32:08.900179 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-config\") pod \"dnsmasq-dns-5c9776ccc5-rk27f\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.900193 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-rk27f\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.900212 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.900237 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.901224 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.917051 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-config-data\") pod \"cinder-scheduler-0\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.919890 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.923046 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-scripts\") pod \"cinder-scheduler-0\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.925923 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.928574 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxhdl\" (UniqueName: 
\"kubernetes.io/projected/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-kube-api-access-zxhdl\") pod \"cinder-scheduler-0\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.940527 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-x2rtg"] Feb 02 17:32:08 crc kubenswrapper[4858]: I0202 17:32:08.963529 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-x2rtg"] Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.002178 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxq2r\" (UniqueName: \"kubernetes.io/projected/be2cff80-fb1c-4421-a892-a140ab4e7dec-kube-api-access-zxq2r\") pod \"dnsmasq-dns-5c9776ccc5-rk27f\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.002405 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-config\") pod \"dnsmasq-dns-5c9776ccc5-rk27f\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.002483 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-rk27f\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.002648 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-rk27f\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.002734 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-rk27f\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.002806 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-rk27f\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.003779 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-rk27f\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.004618 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-config\") pod \"dnsmasq-dns-5c9776ccc5-rk27f\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " 
pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.006016 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-rk27f\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.006533 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-rk27f\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.006797 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-rk27f\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.031849 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxq2r\" (UniqueName: \"kubernetes.io/projected/be2cff80-fb1c-4421-a892-a140ab4e7dec-kube-api-access-zxq2r\") pod \"dnsmasq-dns-5c9776ccc5-rk27f\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.040079 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.042357 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.045355 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.047588 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.077133 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.078638 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.104926 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11466796-476b-4a46-9859-1770359abf01-logs\") pod \"cinder-api-0\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.104987 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/11466796-476b-4a46-9859-1770359abf01-etc-machine-id\") pod \"cinder-api-0\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.105007 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-scripts\") pod \"cinder-api-0\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.105050 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.105073 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5bdf\" (UniqueName: \"kubernetes.io/projected/11466796-476b-4a46-9859-1770359abf01-kube-api-access-l5bdf\") pod \"cinder-api-0\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.105089 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-config-data\") pod \"cinder-api-0\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.105128 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-config-data-custom\") pod \"cinder-api-0\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.206324 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11466796-476b-4a46-9859-1770359abf01-logs\") pod \"cinder-api-0\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.206370 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/11466796-476b-4a46-9859-1770359abf01-etc-machine-id\") pod \"cinder-api-0\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: 
I0202 17:32:09.206390 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-scripts\") pod \"cinder-api-0\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.206429 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.206453 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5bdf\" (UniqueName: \"kubernetes.io/projected/11466796-476b-4a46-9859-1770359abf01-kube-api-access-l5bdf\") pod \"cinder-api-0\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.206469 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-config-data\") pod \"cinder-api-0\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.206507 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-config-data-custom\") pod \"cinder-api-0\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.207604 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/11466796-476b-4a46-9859-1770359abf01-etc-machine-id\") pod \"cinder-api-0\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.212020 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-config-data-custom\") pod \"cinder-api-0\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.212641 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.215461 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-config-data\") pod \"cinder-api-0\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.216154 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-scripts\") pod \"cinder-api-0\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.216321 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11466796-476b-4a46-9859-1770359abf01-logs\") pod \"cinder-api-0\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.239911 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5bdf\" (UniqueName: \"kubernetes.io/projected/11466796-476b-4a46-9859-1770359abf01-kube-api-access-l5bdf\") pod \"cinder-api-0\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.365872 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.760260 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-rk27f"] Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.762668 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.804036 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" event={"ID":"be2cff80-fb1c-4421-a892-a140ab4e7dec","Type":"ContainerStarted","Data":"27fff9b44a00a80472ed16185619d1e4b86bce297f8817cd92b880ed292cf33a"} Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.811172 4858 generic.go:334] "Generic (PLEG): container finished" podID="20f4c2e0-6bac-4c5a-affd-48f2d8301111" containerID="f740f8acdcfb5956b56966e7bdcf5b795f07aad95bc399ab0f3a3f5fc78e4c5d" exitCode=0 Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.811204 4858 generic.go:334] "Generic (PLEG): container finished" podID="20f4c2e0-6bac-4c5a-affd-48f2d8301111" containerID="ef780359f3f63d18376f8ff91714675ffb9b0bfe8b28f2f8ebfce32c385ba9e2" exitCode=2 Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.811248 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f4c2e0-6bac-4c5a-affd-48f2d8301111","Type":"ContainerDied","Data":"f740f8acdcfb5956b56966e7bdcf5b795f07aad95bc399ab0f3a3f5fc78e4c5d"} Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.811276 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f4c2e0-6bac-4c5a-affd-48f2d8301111","Type":"ContainerDied","Data":"ef780359f3f63d18376f8ff91714675ffb9b0bfe8b28f2f8ebfce32c385ba9e2"} Feb 02 17:32:09 crc kubenswrapper[4858]: W0202 17:32:09.889214 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1cbdfbe_d93e_4dcb_bc28_fbff6a59d3f3.slice/crio-a01407aa454a790a0d9867a6d9654bdb6cc2de4358b2ab199d6023ac7915ff2c WatchSource:0}: Error finding container a01407aa454a790a0d9867a6d9654bdb6cc2de4358b2ab199d6023ac7915ff2c: Status 404 returned error can't find the container with id a01407aa454a790a0d9867a6d9654bdb6cc2de4358b2ab199d6023ac7915ff2c Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.891236 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 17:32:09 crc kubenswrapper[4858]: I0202 17:32:09.953937 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59c6cb6f96-ss676" Feb 02 17:32:10 crc kubenswrapper[4858]: I0202 17:32:10.018520 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-api-75f97b655d-lv8wc"] Feb 02 17:32:10 crc kubenswrapper[4858]: I0202 17:32:10.022829 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-75f97b655d-lv8wc" podUID="de5f94f8-30f4-4e19-8195-eb6a5b281de9" containerName="barbican-api-log" containerID="cri-o://9cdf417ee38737aebb2ded473bfa0470de4201c406e99c606db9f406d2f66242" gracePeriod=30 Feb 02 17:32:10 crc kubenswrapper[4858]: I0202 17:32:10.023186 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-75f97b655d-lv8wc" podUID="de5f94f8-30f4-4e19-8195-eb6a5b281de9" containerName="barbican-api" containerID="cri-o://540513034c786c00b6429acad3e30eb43b0270445fb7b440a654639682246522" gracePeriod=30 Feb 02 17:32:10 crc kubenswrapper[4858]: I0202 17:32:10.141867 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 17:32:10 crc kubenswrapper[4858]: I0202 17:32:10.221943 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-857c87669d-c45h7" podUID="24d5a090-abc7-4832-b6c6-2e36edf7d82e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 02 17:32:10 crc kubenswrapper[4858]: I0202 17:32:10.356122 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68f4b57796-rhdnw" podUID="4a208969-437b-449b-ba53-89364175a52a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 02 17:32:10 crc kubenswrapper[4858]: I0202 17:32:10.433260 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a85b3eb5-0945-45b7-875b-e20c2c0e29f7" path="/var/lib/kubelet/pods/a85b3eb5-0945-45b7-875b-e20c2c0e29f7/volumes" Feb 02 17:32:10 crc kubenswrapper[4858]: I0202 17:32:10.434053 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f98781ea-94a7-4f21-975e-4a8d48fad122" path="/var/lib/kubelet/pods/f98781ea-94a7-4f21-975e-4a8d48fad122/volumes" Feb 02 17:32:10 crc kubenswrapper[4858]: I0202 17:32:10.860345 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3","Type":"ContainerStarted","Data":"a01407aa454a790a0d9867a6d9654bdb6cc2de4358b2ab199d6023ac7915ff2c"} Feb 02 17:32:10 crc kubenswrapper[4858]: I0202 17:32:10.869856 4858 generic.go:334] "Generic (PLEG): container finished" podID="de5f94f8-30f4-4e19-8195-eb6a5b281de9" containerID="9cdf417ee38737aebb2ded473bfa0470de4201c406e99c606db9f406d2f66242" exitCode=143 Feb 02 17:32:10 crc kubenswrapper[4858]: I0202 17:32:10.869923 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75f97b655d-lv8wc" event={"ID":"de5f94f8-30f4-4e19-8195-eb6a5b281de9","Type":"ContainerDied","Data":"9cdf417ee38737aebb2ded473bfa0470de4201c406e99c606db9f406d2f66242"} Feb 02 17:32:10 crc kubenswrapper[4858]: I0202 17:32:10.885439 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"11466796-476b-4a46-9859-1770359abf01","Type":"ContainerStarted","Data":"66b27f64d1173c7a483504df6129f8a1b6f1219e369ea4be8569658de91865e5"} Feb 02 17:32:10 crc kubenswrapper[4858]: I0202 17:32:10.900483 4858 generic.go:334] "Generic (PLEG): container finished" podID="be2cff80-fb1c-4421-a892-a140ab4e7dec" 
containerID="f236f3d73748ec08ad2077af7851ef08099ca6a7584573e89379a5691e3e500c" exitCode=0 Feb 02 17:32:10 crc kubenswrapper[4858]: I0202 17:32:10.900592 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" event={"ID":"be2cff80-fb1c-4421-a892-a140ab4e7dec","Type":"ContainerDied","Data":"f236f3d73748ec08ad2077af7851ef08099ca6a7584573e89379a5691e3e500c"} Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.058072 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-78bb7f4c66-lspk6" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.385441 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-655b4bfd7-48p76"] Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.386073 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-655b4bfd7-48p76" podUID="957f5537-848b-45b5-9bc2-7dbffbad0fed" containerName="neutron-api" containerID="cri-o://cd7e8ef4fb775371b61ba5efc5102f4c2da426998320120eda0d0b268a3d9f96" gracePeriod=30 Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.386761 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-655b4bfd7-48p76" podUID="957f5537-848b-45b5-9bc2-7dbffbad0fed" containerName="neutron-httpd" containerID="cri-o://58c48f2cdb1d647834064b7f623d4739cee1b51c67caeb895f426688b3d47862" gracePeriod=30 Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.415588 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.496057 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5765cfccfc-zqg5s"] Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.497728 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.588163 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5765cfccfc-zqg5s"] Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.640716 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xqwp\" (UniqueName: \"kubernetes.io/projected/985d2863-cf61-4125-9842-28ec8706dea9-kube-api-access-7xqwp\") pod \"neutron-5765cfccfc-zqg5s\" (UID: \"985d2863-cf61-4125-9842-28ec8706dea9\") " pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.640807 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/985d2863-cf61-4125-9842-28ec8706dea9-ovndb-tls-certs\") pod \"neutron-5765cfccfc-zqg5s\" (UID: \"985d2863-cf61-4125-9842-28ec8706dea9\") " pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.640847 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/985d2863-cf61-4125-9842-28ec8706dea9-httpd-config\") pod \"neutron-5765cfccfc-zqg5s\" (UID: \"985d2863-cf61-4125-9842-28ec8706dea9\") " pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.640870 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/985d2863-cf61-4125-9842-28ec8706dea9-config\") pod \"neutron-5765cfccfc-zqg5s\" (UID: \"985d2863-cf61-4125-9842-28ec8706dea9\") " pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.640895 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/985d2863-cf61-4125-9842-28ec8706dea9-internal-tls-certs\") pod \"neutron-5765cfccfc-zqg5s\" (UID: \"985d2863-cf61-4125-9842-28ec8706dea9\") " pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.640953 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/985d2863-cf61-4125-9842-28ec8706dea9-combined-ca-bundle\") pod \"neutron-5765cfccfc-zqg5s\" (UID: \"985d2863-cf61-4125-9842-28ec8706dea9\") " pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.641021 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/985d2863-cf61-4125-9842-28ec8706dea9-public-tls-certs\") pod \"neutron-5765cfccfc-zqg5s\" (UID: \"985d2863-cf61-4125-9842-28ec8706dea9\") " pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.742902 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/985d2863-cf61-4125-9842-28ec8706dea9-combined-ca-bundle\") pod \"neutron-5765cfccfc-zqg5s\" (UID: \"985d2863-cf61-4125-9842-28ec8706dea9\") " pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.743030 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/985d2863-cf61-4125-9842-28ec8706dea9-public-tls-certs\") pod \"neutron-5765cfccfc-zqg5s\" (UID: \"985d2863-cf61-4125-9842-28ec8706dea9\") " pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.743079 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xqwp\" (UniqueName: \"kubernetes.io/projected/985d2863-cf61-4125-9842-28ec8706dea9-kube-api-access-7xqwp\") pod \"neutron-5765cfccfc-zqg5s\" (UID: \"985d2863-cf61-4125-9842-28ec8706dea9\") " pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.743142 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/985d2863-cf61-4125-9842-28ec8706dea9-ovndb-tls-certs\") pod \"neutron-5765cfccfc-zqg5s\" (UID: \"985d2863-cf61-4125-9842-28ec8706dea9\") " pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.743187 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/985d2863-cf61-4125-9842-28ec8706dea9-httpd-config\") pod \"neutron-5765cfccfc-zqg5s\" (UID: \"985d2863-cf61-4125-9842-28ec8706dea9\") " pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.743216 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/985d2863-cf61-4125-9842-28ec8706dea9-config\") pod \"neutron-5765cfccfc-zqg5s\" (UID: \"985d2863-cf61-4125-9842-28ec8706dea9\") " pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.743240 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/985d2863-cf61-4125-9842-28ec8706dea9-internal-tls-certs\") pod \"neutron-5765cfccfc-zqg5s\" (UID: \"985d2863-cf61-4125-9842-28ec8706dea9\") " pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.749220 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/985d2863-cf61-4125-9842-28ec8706dea9-ovndb-tls-certs\") pod \"neutron-5765cfccfc-zqg5s\" (UID: \"985d2863-cf61-4125-9842-28ec8706dea9\") " pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.751884 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/985d2863-cf61-4125-9842-28ec8706dea9-combined-ca-bundle\") pod \"neutron-5765cfccfc-zqg5s\" (UID: \"985d2863-cf61-4125-9842-28ec8706dea9\") " pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.773076 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/985d2863-cf61-4125-9842-28ec8706dea9-config\") pod \"neutron-5765cfccfc-zqg5s\" (UID: \"985d2863-cf61-4125-9842-28ec8706dea9\") " pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.773095 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/985d2863-cf61-4125-9842-28ec8706dea9-internal-tls-certs\") pod \"neutron-5765cfccfc-zqg5s\" (UID: 
\"985d2863-cf61-4125-9842-28ec8706dea9\") " pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.773169 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/985d2863-cf61-4125-9842-28ec8706dea9-httpd-config\") pod \"neutron-5765cfccfc-zqg5s\" (UID: \"985d2863-cf61-4125-9842-28ec8706dea9\") " pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.773245 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.776045 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/985d2863-cf61-4125-9842-28ec8706dea9-public-tls-certs\") pod \"neutron-5765cfccfc-zqg5s\" (UID: \"985d2863-cf61-4125-9842-28ec8706dea9\") " pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.779769 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xqwp\" (UniqueName: \"kubernetes.io/projected/985d2863-cf61-4125-9842-28ec8706dea9-kube-api-access-7xqwp\") pod \"neutron-5765cfccfc-zqg5s\" (UID: \"985d2863-cf61-4125-9842-28ec8706dea9\") " pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.844129 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-scripts\") pod \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.844186 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-combined-ca-bundle\") pod \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.844251 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f4c2e0-6bac-4c5a-affd-48f2d8301111-log-httpd\") pod \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.844844 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f4c2e0-6bac-4c5a-affd-48f2d8301111-run-httpd\") pod \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.844941 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s88n\" (UniqueName: \"kubernetes.io/projected/20f4c2e0-6bac-4c5a-affd-48f2d8301111-kube-api-access-8s88n\") pod \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.844999 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-config-data\") pod \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") " Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.845764 4858 
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.845764 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20f4c2e0-6bac-4c5a-affd-48f2d8301111-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "20f4c2e0-6bac-4c5a-affd-48f2d8301111" (UID: "20f4c2e0-6bac-4c5a-affd-48f2d8301111"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.845088 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-sg-core-conf-yaml\") pod \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\" (UID: \"20f4c2e0-6bac-4c5a-affd-48f2d8301111\") "
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.847414 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20f4c2e0-6bac-4c5a-affd-48f2d8301111-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "20f4c2e0-6bac-4c5a-affd-48f2d8301111" (UID: "20f4c2e0-6bac-4c5a-affd-48f2d8301111"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.853161 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-scripts" (OuterVolumeSpecName: "scripts") pod "20f4c2e0-6bac-4c5a-affd-48f2d8301111" (UID: "20f4c2e0-6bac-4c5a-affd-48f2d8301111"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.853234 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20f4c2e0-6bac-4c5a-affd-48f2d8301111-kube-api-access-8s88n" (OuterVolumeSpecName: "kube-api-access-8s88n") pod "20f4c2e0-6bac-4c5a-affd-48f2d8301111" (UID: "20f4c2e0-6bac-4c5a-affd-48f2d8301111"). InnerVolumeSpecName "kube-api-access-8s88n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.854139 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.854179 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f4c2e0-6bac-4c5a-affd-48f2d8301111-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.854192 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f4c2e0-6bac-4c5a-affd-48f2d8301111-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.854204 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s88n\" (UniqueName: \"kubernetes.io/projected/20f4c2e0-6bac-4c5a-affd-48f2d8301111-kube-api-access-8s88n\") on node \"crc\" DevicePath \"\""
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.904402 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "20f4c2e0-6bac-4c5a-affd-48f2d8301111" (UID: "20f4c2e0-6bac-4c5a-affd-48f2d8301111"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.934011 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"11466796-476b-4a46-9859-1770359abf01","Type":"ContainerStarted","Data":"029d22d3e0daf583af1c5ca3ca4e1d5103dcdbc75358867ceae420aa0081295b"}
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.939329 4858 generic.go:334] "Generic (PLEG): container finished" podID="957f5537-848b-45b5-9bc2-7dbffbad0fed" containerID="58c48f2cdb1d647834064b7f623d4739cee1b51c67caeb895f426688b3d47862" exitCode=0
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.939398 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-655b4bfd7-48p76" event={"ID":"957f5537-848b-45b5-9bc2-7dbffbad0fed","Type":"ContainerDied","Data":"58c48f2cdb1d647834064b7f623d4739cee1b51c67caeb895f426688b3d47862"}
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.955581 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.959888 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" event={"ID":"be2cff80-fb1c-4421-a892-a140ab4e7dec","Type":"ContainerStarted","Data":"1ac1e831913db6c85c069349536f5e09fc41a2e3f94be33d1f755715e18e3c17"}
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.960189 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f"
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.975240 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20f4c2e0-6bac-4c5a-affd-48f2d8301111" (UID: "20f4c2e0-6bac-4c5a-affd-48f2d8301111"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.978421 4858 generic.go:334] "Generic (PLEG): container finished" podID="20f4c2e0-6bac-4c5a-affd-48f2d8301111" containerID="9b71df5e3093339ed27494207b4e8407bc23ffb89e50cc0d982bb84e06fecfe1" exitCode=0
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.978507 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f4c2e0-6bac-4c5a-affd-48f2d8301111","Type":"ContainerDied","Data":"9b71df5e3093339ed27494207b4e8407bc23ffb89e50cc0d982bb84e06fecfe1"}
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.978571 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f4c2e0-6bac-4c5a-affd-48f2d8301111","Type":"ContainerDied","Data":"1f7ef965e23a29514664736538ca6e53b6e2913e318e7b595a31dc91bb52bc54"}
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.978595 4858 scope.go:117] "RemoveContainer" containerID="f740f8acdcfb5956b56966e7bdcf5b795f07aad95bc399ab0f3a3f5fc78e4c5d"
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.978880 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.985138 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-config-data" (OuterVolumeSpecName: "config-data") pod "20f4c2e0-6bac-4c5a-affd-48f2d8301111" (UID: "20f4c2e0-6bac-4c5a-affd-48f2d8301111"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:32:11 crc kubenswrapper[4858]: I0202 17:32:11.993721 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5765cfccfc-zqg5s"
Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.003967 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.007047 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" podStartSLOduration=4.007024858 podStartE2EDuration="4.007024858s" podCreationTimestamp="2026-02-02 17:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:32:11.99205838 +0000 UTC m=+1033.144473645" watchObservedRunningTime="2026-02-02 17:32:12.007024858 +0000 UTC m=+1033.159440123"
Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.057572 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.057598 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f4c2e0-6bac-4c5a-affd-48f2d8301111-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.114148 4858 scope.go:117] "RemoveContainer" containerID="ef780359f3f63d18376f8ff91714675ffb9b0bfe8b28f2f8ebfce32c385ba9e2"
Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.206411 4858 scope.go:117] "RemoveContainer" containerID="9b71df5e3093339ed27494207b4e8407bc23ffb89e50cc0d982bb84e06fecfe1"
Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.377182 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.390128 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.416145 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20f4c2e0-6bac-4c5a-affd-48f2d8301111" path="/var/lib/kubelet/pods/20f4c2e0-6bac-4c5a-affd-48f2d8301111/volumes"
Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.416925 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 02 17:32:12 crc kubenswrapper[4858]: E0202 17:32:12.417248 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f4c2e0-6bac-4c5a-affd-48f2d8301111" containerName="proxy-httpd"
Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.417265 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f4c2e0-6bac-4c5a-affd-48f2d8301111" containerName="proxy-httpd"
Feb 02 17:32:12 crc kubenswrapper[4858]: E0202 17:32:12.417280 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f4c2e0-6bac-4c5a-affd-48f2d8301111"
containerName="ceilometer-notification-agent" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.417287 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f4c2e0-6bac-4c5a-affd-48f2d8301111" containerName="ceilometer-notification-agent" Feb 02 17:32:12 crc kubenswrapper[4858]: E0202 17:32:12.417303 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f4c2e0-6bac-4c5a-affd-48f2d8301111" containerName="sg-core" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.417308 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f4c2e0-6bac-4c5a-affd-48f2d8301111" containerName="sg-core" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.417469 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f4c2e0-6bac-4c5a-affd-48f2d8301111" containerName="sg-core" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.417485 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f4c2e0-6bac-4c5a-affd-48f2d8301111" containerName="proxy-httpd" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.417493 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f4c2e0-6bac-4c5a-affd-48f2d8301111" containerName="ceilometer-notification-agent" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.419214 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.423399 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.423412 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.428359 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.446614 4858 scope.go:117] "RemoveContainer" containerID="f740f8acdcfb5956b56966e7bdcf5b795f07aad95bc399ab0f3a3f5fc78e4c5d" Feb 02 17:32:12 crc kubenswrapper[4858]: E0202 17:32:12.450194 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f740f8acdcfb5956b56966e7bdcf5b795f07aad95bc399ab0f3a3f5fc78e4c5d\": container with ID starting with f740f8acdcfb5956b56966e7bdcf5b795f07aad95bc399ab0f3a3f5fc78e4c5d not found: ID does not exist" containerID="f740f8acdcfb5956b56966e7bdcf5b795f07aad95bc399ab0f3a3f5fc78e4c5d" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.450227 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f740f8acdcfb5956b56966e7bdcf5b795f07aad95bc399ab0f3a3f5fc78e4c5d"} err="failed to get container status \"f740f8acdcfb5956b56966e7bdcf5b795f07aad95bc399ab0f3a3f5fc78e4c5d\": rpc error: code = NotFound desc = could not find container \"f740f8acdcfb5956b56966e7bdcf5b795f07aad95bc399ab0f3a3f5fc78e4c5d\": container with ID starting with f740f8acdcfb5956b56966e7bdcf5b795f07aad95bc399ab0f3a3f5fc78e4c5d not found: ID does not exist" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.450249 4858 scope.go:117] "RemoveContainer" containerID="ef780359f3f63d18376f8ff91714675ffb9b0bfe8b28f2f8ebfce32c385ba9e2" Feb 02 17:32:12 crc kubenswrapper[4858]: E0202 17:32:12.450593 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ef780359f3f63d18376f8ff91714675ffb9b0bfe8b28f2f8ebfce32c385ba9e2\": container with ID starting with ef780359f3f63d18376f8ff91714675ffb9b0bfe8b28f2f8ebfce32c385ba9e2 not found: ID does not exist" containerID="ef780359f3f63d18376f8ff91714675ffb9b0bfe8b28f2f8ebfce32c385ba9e2" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.450625 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef780359f3f63d18376f8ff91714675ffb9b0bfe8b28f2f8ebfce32c385ba9e2"} err="failed to get container status \"ef780359f3f63d18376f8ff91714675ffb9b0bfe8b28f2f8ebfce32c385ba9e2\": rpc error: code = NotFound desc = could not find container \"ef780359f3f63d18376f8ff91714675ffb9b0bfe8b28f2f8ebfce32c385ba9e2\": container with ID starting with ef780359f3f63d18376f8ff91714675ffb9b0bfe8b28f2f8ebfce32c385ba9e2 not found: ID does not exist" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.450663 4858 scope.go:117] "RemoveContainer" containerID="9b71df5e3093339ed27494207b4e8407bc23ffb89e50cc0d982bb84e06fecfe1" Feb 02 17:32:12 crc kubenswrapper[4858]: E0202 17:32:12.451022 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b71df5e3093339ed27494207b4e8407bc23ffb89e50cc0d982bb84e06fecfe1\": container with ID starting with 9b71df5e3093339ed27494207b4e8407bc23ffb89e50cc0d982bb84e06fecfe1 not found: ID does not exist" containerID="9b71df5e3093339ed27494207b4e8407bc23ffb89e50cc0d982bb84e06fecfe1" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.451051 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b71df5e3093339ed27494207b4e8407bc23ffb89e50cc0d982bb84e06fecfe1"} err="failed to get container status \"9b71df5e3093339ed27494207b4e8407bc23ffb89e50cc0d982bb84e06fecfe1\": rpc error: code = NotFound desc = could not find container \"9b71df5e3093339ed27494207b4e8407bc23ffb89e50cc0d982bb84e06fecfe1\": container with ID starting with 9b71df5e3093339ed27494207b4e8407bc23ffb89e50cc0d982bb84e06fecfe1 not found: ID does not exist" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.467420 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.467464 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-config-data\") pod \"ceilometer-0\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.467546 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-scripts\") pod \"ceilometer-0\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.467564 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz4ms\" (UniqueName: \"kubernetes.io/projected/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-kube-api-access-qz4ms\") pod \"ceilometer-0\" (UID: 
\"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.467630 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-run-httpd\") pod \"ceilometer-0\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.467656 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-log-httpd\") pod \"ceilometer-0\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.467692 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.569424 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.569491 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.569526 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-config-data\") pod \"ceilometer-0\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.569618 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-scripts\") pod \"ceilometer-0\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.569644 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz4ms\" (UniqueName: \"kubernetes.io/projected/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-kube-api-access-qz4ms\") pod \"ceilometer-0\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.569707 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-run-httpd\") pod \"ceilometer-0\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.569739 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-log-httpd\") pod \"ceilometer-0\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.570269 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-log-httpd\") pod \"ceilometer-0\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.571875 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-run-httpd\") pod \"ceilometer-0\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.577629 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-scripts\") pod \"ceilometer-0\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.577939 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-config-data\") pod \"ceilometer-0\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.582177 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.603241 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz4ms\" (UniqueName: \"kubernetes.io/projected/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-kube-api-access-qz4ms\") pod \"ceilometer-0\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.608768 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.744839 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:32:12 crc kubenswrapper[4858]: I0202 17:32:12.783265 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5765cfccfc-zqg5s"] Feb 02 17:32:13 crc kubenswrapper[4858]: I0202 17:32:13.033967 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5765cfccfc-zqg5s" event={"ID":"985d2863-cf61-4125-9842-28ec8706dea9","Type":"ContainerStarted","Data":"17d5e155b01645cdc94c1256fc38243a54f693c6b1b844ee7c5622275bbd12f3"} Feb 02 17:32:13 crc kubenswrapper[4858]: I0202 17:32:13.038841 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3","Type":"ContainerStarted","Data":"bc50c8b01de0282f96aa61e69e3f4b51606922211a2174d2dd17dbbb7353540f"} Feb 02 17:32:13 crc kubenswrapper[4858]: I0202 17:32:13.042761 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"11466796-476b-4a46-9859-1770359abf01","Type":"ContainerStarted","Data":"fe12b5881303f5dc66f0bf36b5bf3d8257c37a26f68f71a9a95d0aa4febcb7b3"} Feb 02 17:32:13 crc kubenswrapper[4858]: I0202 17:32:13.042910 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="11466796-476b-4a46-9859-1770359abf01" containerName="cinder-api-log" containerID="cri-o://029d22d3e0daf583af1c5ca3ca4e1d5103dcdbc75358867ceae420aa0081295b" gracePeriod=30 Feb 02 17:32:13 crc kubenswrapper[4858]: I0202 17:32:13.043034 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 17:32:13 crc kubenswrapper[4858]: I0202 17:32:13.043360 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="11466796-476b-4a46-9859-1770359abf01" containerName="cinder-api" containerID="cri-o://fe12b5881303f5dc66f0bf36b5bf3d8257c37a26f68f71a9a95d0aa4febcb7b3" gracePeriod=30 Feb 02 17:32:13 crc kubenswrapper[4858]: I0202 17:32:13.072088 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.072061024 podStartE2EDuration="5.072061024s" podCreationTimestamp="2026-02-02 17:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:32:13.062676486 +0000 UTC m=+1034.215091751" watchObservedRunningTime="2026-02-02 17:32:13.072061024 +0000 UTC m=+1034.224476309" Feb 02 17:32:13 crc kubenswrapper[4858]: I0202 17:32:13.333884 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:32:13 crc kubenswrapper[4858]: I0202 17:32:13.711112 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-655b4bfd7-48p76" podUID="957f5537-848b-45b5-9bc2-7dbffbad0fed" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.155:9696/\": dial tcp 10.217.0.155:9696: connect: connection refused" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.010253 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-75f97b655d-lv8wc" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.104639 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2hnq\" (UniqueName: \"kubernetes.io/projected/de5f94f8-30f4-4e19-8195-eb6a5b281de9-kube-api-access-d2hnq\") pod \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\" (UID: \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\") " Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.104712 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5f94f8-30f4-4e19-8195-eb6a5b281de9-combined-ca-bundle\") pod \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\" (UID: \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\") " Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.104994 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de5f94f8-30f4-4e19-8195-eb6a5b281de9-logs\") pod \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\" (UID: \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\") " Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.105048 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de5f94f8-30f4-4e19-8195-eb6a5b281de9-config-data-custom\") pod \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\" (UID: \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\") " Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.105129 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de5f94f8-30f4-4e19-8195-eb6a5b281de9-config-data\") pod \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\" (UID: \"de5f94f8-30f4-4e19-8195-eb6a5b281de9\") " Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.105309 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.105416 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de5f94f8-30f4-4e19-8195-eb6a5b281de9-logs" (OuterVolumeSpecName: "logs") pod "de5f94f8-30f4-4e19-8195-eb6a5b281de9" (UID: "de5f94f8-30f4-4e19-8195-eb6a5b281de9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.105458 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fef11f2-a89c-48f7-b0e8-4ed3045e028e","Type":"ContainerStarted","Data":"3ea822e7c5b19934b295eb6a1203209e64f2757a03d69a65bb08085b1b7cd3e5"} Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.105744 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de5f94f8-30f4-4e19-8195-eb6a5b281de9-logs\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.111450 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de5f94f8-30f4-4e19-8195-eb6a5b281de9-kube-api-access-d2hnq" (OuterVolumeSpecName: "kube-api-access-d2hnq") pod "de5f94f8-30f4-4e19-8195-eb6a5b281de9" (UID: "de5f94f8-30f4-4e19-8195-eb6a5b281de9"). InnerVolumeSpecName "kube-api-access-d2hnq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.117243 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5765cfccfc-zqg5s" event={"ID":"985d2863-cf61-4125-9842-28ec8706dea9","Type":"ContainerStarted","Data":"658bb58a8be90895afb5e8007201b6a8dd577038208071bbc857c66ed21e61a2"} Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.117281 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5765cfccfc-zqg5s" event={"ID":"985d2863-cf61-4125-9842-28ec8706dea9","Type":"ContainerStarted","Data":"dacac73ab5ba89261e21959731794d3772299676d5e913dac6928214ef303e4f"} Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.118102 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.120414 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5f94f8-30f4-4e19-8195-eb6a5b281de9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "de5f94f8-30f4-4e19-8195-eb6a5b281de9" (UID: "de5f94f8-30f4-4e19-8195-eb6a5b281de9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.121363 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3","Type":"ContainerStarted","Data":"725caa535900479fe43f0a1df52a011b37ac2e952969c31f59b39ba34191b7f1"} Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.141681 4858 generic.go:334] "Generic (PLEG): container finished" podID="de5f94f8-30f4-4e19-8195-eb6a5b281de9" containerID="540513034c786c00b6429acad3e30eb43b0270445fb7b440a654639682246522" exitCode=0 Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.141761 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75f97b655d-lv8wc" event={"ID":"de5f94f8-30f4-4e19-8195-eb6a5b281de9","Type":"ContainerDied","Data":"540513034c786c00b6429acad3e30eb43b0270445fb7b440a654639682246522"} Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.141782 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75f97b655d-lv8wc" event={"ID":"de5f94f8-30f4-4e19-8195-eb6a5b281de9","Type":"ContainerDied","Data":"f344e047124d4ad7d2073e3a6242a0ac5130802186f0c74b2ebd1bd621f66e28"} Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.141797 4858 scope.go:117] "RemoveContainer" containerID="540513034c786c00b6429acad3e30eb43b0270445fb7b440a654639682246522" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.141921 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-75f97b655d-lv8wc" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.188353 4858 scope.go:117] "RemoveContainer" containerID="9cdf417ee38737aebb2ded473bfa0470de4201c406e99c606db9f406d2f66242" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.188537 4858 generic.go:334] "Generic (PLEG): container finished" podID="11466796-476b-4a46-9859-1770359abf01" containerID="fe12b5881303f5dc66f0bf36b5bf3d8257c37a26f68f71a9a95d0aa4febcb7b3" exitCode=0 Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.188554 4858 generic.go:334] "Generic (PLEG): container finished" podID="11466796-476b-4a46-9859-1770359abf01" containerID="029d22d3e0daf583af1c5ca3ca4e1d5103dcdbc75358867ceae420aa0081295b" exitCode=143 Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.188574 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"11466796-476b-4a46-9859-1770359abf01","Type":"ContainerDied","Data":"fe12b5881303f5dc66f0bf36b5bf3d8257c37a26f68f71a9a95d0aa4febcb7b3"} Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.188599 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"11466796-476b-4a46-9859-1770359abf01","Type":"ContainerDied","Data":"029d22d3e0daf583af1c5ca3ca4e1d5103dcdbc75358867ceae420aa0081295b"} Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.188609 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"11466796-476b-4a46-9859-1770359abf01","Type":"ContainerDied","Data":"66b27f64d1173c7a483504df6129f8a1b6f1219e369ea4be8569658de91865e5"} Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.188638 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.189353 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.169913321 podStartE2EDuration="6.189340863s" podCreationTimestamp="2026-02-02 17:32:08 +0000 UTC" firstStartedPulling="2026-02-02 17:32:09.892650454 +0000 UTC m=+1031.045065709" lastFinishedPulling="2026-02-02 17:32:10.912077986 +0000 UTC m=+1032.064493251" observedRunningTime="2026-02-02 17:32:14.188939502 +0000 UTC m=+1035.341354777" watchObservedRunningTime="2026-02-02 17:32:14.189340863 +0000 UTC m=+1035.341756128" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.189754 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5765cfccfc-zqg5s" podStartSLOduration=3.189748635 podStartE2EDuration="3.189748635s" podCreationTimestamp="2026-02-02 17:32:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:32:14.157498903 +0000 UTC m=+1035.309914168" watchObservedRunningTime="2026-02-02 17:32:14.189748635 +0000 UTC m=+1035.342163890" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.195114 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5f94f8-30f4-4e19-8195-eb6a5b281de9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de5f94f8-30f4-4e19-8195-eb6a5b281de9" (UID: "de5f94f8-30f4-4e19-8195-eb6a5b281de9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.199956 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5f94f8-30f4-4e19-8195-eb6a5b281de9-config-data" (OuterVolumeSpecName: "config-data") pod "de5f94f8-30f4-4e19-8195-eb6a5b281de9" (UID: "de5f94f8-30f4-4e19-8195-eb6a5b281de9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.206558 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-combined-ca-bundle\") pod \"11466796-476b-4a46-9859-1770359abf01\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.206724 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/11466796-476b-4a46-9859-1770359abf01-etc-machine-id\") pod \"11466796-476b-4a46-9859-1770359abf01\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.206753 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-config-data-custom\") pod \"11466796-476b-4a46-9859-1770359abf01\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.206786 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-scripts\") pod \"11466796-476b-4a46-9859-1770359abf01\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.206817 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-config-data\") pod \"11466796-476b-4a46-9859-1770359abf01\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.206874 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11466796-476b-4a46-9859-1770359abf01-logs\") pod \"11466796-476b-4a46-9859-1770359abf01\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.206945 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5bdf\" (UniqueName: \"kubernetes.io/projected/11466796-476b-4a46-9859-1770359abf01-kube-api-access-l5bdf\") pod \"11466796-476b-4a46-9859-1770359abf01\" (UID: \"11466796-476b-4a46-9859-1770359abf01\") " Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.207599 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de5f94f8-30f4-4e19-8195-eb6a5b281de9-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.207619 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de5f94f8-30f4-4e19-8195-eb6a5b281de9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.207632 4858 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2hnq\" (UniqueName: \"kubernetes.io/projected/de5f94f8-30f4-4e19-8195-eb6a5b281de9-kube-api-access-d2hnq\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.207646 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5f94f8-30f4-4e19-8195-eb6a5b281de9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.213562 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "11466796-476b-4a46-9859-1770359abf01" (UID: "11466796-476b-4a46-9859-1770359abf01"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.213612 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11466796-476b-4a46-9859-1770359abf01-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "11466796-476b-4a46-9859-1770359abf01" (UID: "11466796-476b-4a46-9859-1770359abf01"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.213825 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11466796-476b-4a46-9859-1770359abf01-logs" (OuterVolumeSpecName: "logs") pod "11466796-476b-4a46-9859-1770359abf01" (UID: "11466796-476b-4a46-9859-1770359abf01"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.214290 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11466796-476b-4a46-9859-1770359abf01-kube-api-access-l5bdf" (OuterVolumeSpecName: "kube-api-access-l5bdf") pod "11466796-476b-4a46-9859-1770359abf01" (UID: "11466796-476b-4a46-9859-1770359abf01"). InnerVolumeSpecName "kube-api-access-l5bdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.245551 4858 scope.go:117] "RemoveContainer" containerID="540513034c786c00b6429acad3e30eb43b0270445fb7b440a654639682246522" Feb 02 17:32:14 crc kubenswrapper[4858]: E0202 17:32:14.247198 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"540513034c786c00b6429acad3e30eb43b0270445fb7b440a654639682246522\": container with ID starting with 540513034c786c00b6429acad3e30eb43b0270445fb7b440a654639682246522 not found: ID does not exist" containerID="540513034c786c00b6429acad3e30eb43b0270445fb7b440a654639682246522" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.247365 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"540513034c786c00b6429acad3e30eb43b0270445fb7b440a654639682246522"} err="failed to get container status \"540513034c786c00b6429acad3e30eb43b0270445fb7b440a654639682246522\": rpc error: code = NotFound desc = could not find container \"540513034c786c00b6429acad3e30eb43b0270445fb7b440a654639682246522\": container with ID starting with 540513034c786c00b6429acad3e30eb43b0270445fb7b440a654639682246522 not found: ID does not exist" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.247462 4858 scope.go:117] "RemoveContainer" containerID="9cdf417ee38737aebb2ded473bfa0470de4201c406e99c606db9f406d2f66242" Feb 02 17:32:14 crc kubenswrapper[4858]: E0202 17:32:14.248364 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cdf417ee38737aebb2ded473bfa0470de4201c406e99c606db9f406d2f66242\": container with ID starting with 9cdf417ee38737aebb2ded473bfa0470de4201c406e99c606db9f406d2f66242 not found: ID does not exist" containerID="9cdf417ee38737aebb2ded473bfa0470de4201c406e99c606db9f406d2f66242" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.248459 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cdf417ee38737aebb2ded473bfa0470de4201c406e99c606db9f406d2f66242"} err="failed to get container status \"9cdf417ee38737aebb2ded473bfa0470de4201c406e99c606db9f406d2f66242\": rpc error: code = NotFound desc = could not find container \"9cdf417ee38737aebb2ded473bfa0470de4201c406e99c606db9f406d2f66242\": container with ID starting with 9cdf417ee38737aebb2ded473bfa0470de4201c406e99c606db9f406d2f66242 not found: ID does not exist" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.248526 4858 scope.go:117] "RemoveContainer" containerID="fe12b5881303f5dc66f0bf36b5bf3d8257c37a26f68f71a9a95d0aa4febcb7b3" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.250082 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-scripts" (OuterVolumeSpecName: "scripts") pod "11466796-476b-4a46-9859-1770359abf01" (UID: "11466796-476b-4a46-9859-1770359abf01"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.279123 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11466796-476b-4a46-9859-1770359abf01" (UID: "11466796-476b-4a46-9859-1770359abf01"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.309852 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.311615 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11466796-476b-4a46-9859-1770359abf01-logs\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.312424 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5bdf\" (UniqueName: \"kubernetes.io/projected/11466796-476b-4a46-9859-1770359abf01-kube-api-access-l5bdf\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.312445 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.312456 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/11466796-476b-4a46-9859-1770359abf01-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.312465 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.314761 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-config-data" (OuterVolumeSpecName: "config-data") pod "11466796-476b-4a46-9859-1770359abf01" (UID: "11466796-476b-4a46-9859-1770359abf01"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.403459 4858 scope.go:117] "RemoveContainer" containerID="029d22d3e0daf583af1c5ca3ca4e1d5103dcdbc75358867ceae420aa0081295b" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.422272 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11466796-476b-4a46-9859-1770359abf01-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.438232 4858 scope.go:117] "RemoveContainer" containerID="fe12b5881303f5dc66f0bf36b5bf3d8257c37a26f68f71a9a95d0aa4febcb7b3" Feb 02 17:32:14 crc kubenswrapper[4858]: E0202 17:32:14.438707 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe12b5881303f5dc66f0bf36b5bf3d8257c37a26f68f71a9a95d0aa4febcb7b3\": container with ID starting with fe12b5881303f5dc66f0bf36b5bf3d8257c37a26f68f71a9a95d0aa4febcb7b3 not found: ID does not exist" containerID="fe12b5881303f5dc66f0bf36b5bf3d8257c37a26f68f71a9a95d0aa4febcb7b3" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.438743 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe12b5881303f5dc66f0bf36b5bf3d8257c37a26f68f71a9a95d0aa4febcb7b3"} err="failed to get container status \"fe12b5881303f5dc66f0bf36b5bf3d8257c37a26f68f71a9a95d0aa4febcb7b3\": rpc error: code = NotFound desc = could not find container \"fe12b5881303f5dc66f0bf36b5bf3d8257c37a26f68f71a9a95d0aa4febcb7b3\": container with ID starting with fe12b5881303f5dc66f0bf36b5bf3d8257c37a26f68f71a9a95d0aa4febcb7b3 not found: ID does not exist" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.438768 4858 scope.go:117] "RemoveContainer" containerID="029d22d3e0daf583af1c5ca3ca4e1d5103dcdbc75358867ceae420aa0081295b" Feb 02 17:32:14 crc kubenswrapper[4858]: E0202 17:32:14.439120 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"029d22d3e0daf583af1c5ca3ca4e1d5103dcdbc75358867ceae420aa0081295b\": container with ID starting with 029d22d3e0daf583af1c5ca3ca4e1d5103dcdbc75358867ceae420aa0081295b not found: ID does not exist" containerID="029d22d3e0daf583af1c5ca3ca4e1d5103dcdbc75358867ceae420aa0081295b" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.439145 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"029d22d3e0daf583af1c5ca3ca4e1d5103dcdbc75358867ceae420aa0081295b"} err="failed to get container status \"029d22d3e0daf583af1c5ca3ca4e1d5103dcdbc75358867ceae420aa0081295b\": rpc error: code = NotFound desc = could not find container \"029d22d3e0daf583af1c5ca3ca4e1d5103dcdbc75358867ceae420aa0081295b\": container with ID starting with 029d22d3e0daf583af1c5ca3ca4e1d5103dcdbc75358867ceae420aa0081295b not found: ID does not exist" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.439162 4858 scope.go:117] "RemoveContainer" containerID="fe12b5881303f5dc66f0bf36b5bf3d8257c37a26f68f71a9a95d0aa4febcb7b3" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.439881 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe12b5881303f5dc66f0bf36b5bf3d8257c37a26f68f71a9a95d0aa4febcb7b3"} err="failed to get container status \"fe12b5881303f5dc66f0bf36b5bf3d8257c37a26f68f71a9a95d0aa4febcb7b3\": rpc error: code = NotFound desc = could not find container 
\"fe12b5881303f5dc66f0bf36b5bf3d8257c37a26f68f71a9a95d0aa4febcb7b3\": container with ID starting with fe12b5881303f5dc66f0bf36b5bf3d8257c37a26f68f71a9a95d0aa4febcb7b3 not found: ID does not exist" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.439903 4858 scope.go:117] "RemoveContainer" containerID="029d22d3e0daf583af1c5ca3ca4e1d5103dcdbc75358867ceae420aa0081295b" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.441323 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"029d22d3e0daf583af1c5ca3ca4e1d5103dcdbc75358867ceae420aa0081295b"} err="failed to get container status \"029d22d3e0daf583af1c5ca3ca4e1d5103dcdbc75358867ceae420aa0081295b\": rpc error: code = NotFound desc = could not find container \"029d22d3e0daf583af1c5ca3ca4e1d5103dcdbc75358867ceae420aa0081295b\": container with ID starting with 029d22d3e0daf583af1c5ca3ca4e1d5103dcdbc75358867ceae420aa0081295b not found: ID does not exist" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.487001 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-75f97b655d-lv8wc"] Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.499877 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-75f97b655d-lv8wc"] Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.518765 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.532988 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.543830 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 17:32:14 crc kubenswrapper[4858]: E0202 17:32:14.544278 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de5f94f8-30f4-4e19-8195-eb6a5b281de9" containerName="barbican-api" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.544301 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="de5f94f8-30f4-4e19-8195-eb6a5b281de9" containerName="barbican-api" Feb 02 17:32:14 crc kubenswrapper[4858]: E0202 17:32:14.544321 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11466796-476b-4a46-9859-1770359abf01" containerName="cinder-api-log" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.544328 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="11466796-476b-4a46-9859-1770359abf01" containerName="cinder-api-log" Feb 02 17:32:14 crc kubenswrapper[4858]: E0202 17:32:14.544342 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de5f94f8-30f4-4e19-8195-eb6a5b281de9" containerName="barbican-api-log" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.544350 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="de5f94f8-30f4-4e19-8195-eb6a5b281de9" containerName="barbican-api-log" Feb 02 17:32:14 crc kubenswrapper[4858]: E0202 17:32:14.544363 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11466796-476b-4a46-9859-1770359abf01" containerName="cinder-api" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.544370 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="11466796-476b-4a46-9859-1770359abf01" containerName="cinder-api" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.544588 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="11466796-476b-4a46-9859-1770359abf01" containerName="cinder-api-log" Feb 02 17:32:14 crc kubenswrapper[4858]: 
I0202 17:32:14.544607 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="de5f94f8-30f4-4e19-8195-eb6a5b281de9" containerName="barbican-api" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.544626 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="11466796-476b-4a46-9859-1770359abf01" containerName="cinder-api" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.544639 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="de5f94f8-30f4-4e19-8195-eb6a5b281de9" containerName="barbican-api-log" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.552087 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.563062 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.564227 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.568400 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.573443 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.733196 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8a1f97c-10b9-489f-9711-d6cd63f6e974-config-data-custom\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.733786 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8a1f97c-10b9-489f-9711-d6cd63f6e974-scripts\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.733938 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8a1f97c-10b9-489f-9711-d6cd63f6e974-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.733988 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8a1f97c-10b9-489f-9711-d6cd63f6e974-config-data\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.734019 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmv7f\" (UniqueName: \"kubernetes.io/projected/c8a1f97c-10b9-489f-9711-d6cd63f6e974-kube-api-access-rmv7f\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.734047 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8a1f97c-10b9-489f-9711-d6cd63f6e974-logs\") pod \"cinder-api-0\" (UID: 
\"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.734085 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8a1f97c-10b9-489f-9711-d6cd63f6e974-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.734106 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8a1f97c-10b9-489f-9711-d6cd63f6e974-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.734167 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8a1f97c-10b9-489f-9711-d6cd63f6e974-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.841121 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8a1f97c-10b9-489f-9711-d6cd63f6e974-config-data-custom\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.841175 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8a1f97c-10b9-489f-9711-d6cd63f6e974-scripts\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.841204 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8a1f97c-10b9-489f-9711-d6cd63f6e974-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.841232 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8a1f97c-10b9-489f-9711-d6cd63f6e974-config-data\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.841261 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmv7f\" (UniqueName: \"kubernetes.io/projected/c8a1f97c-10b9-489f-9711-d6cd63f6e974-kube-api-access-rmv7f\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.841289 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8a1f97c-10b9-489f-9711-d6cd63f6e974-logs\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.841317 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/c8a1f97c-10b9-489f-9711-d6cd63f6e974-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.841335 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8a1f97c-10b9-489f-9711-d6cd63f6e974-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.841355 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8a1f97c-10b9-489f-9711-d6cd63f6e974-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.842951 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8a1f97c-10b9-489f-9711-d6cd63f6e974-logs\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.843055 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8a1f97c-10b9-489f-9711-d6cd63f6e974-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.848608 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8a1f97c-10b9-489f-9711-d6cd63f6e974-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.849776 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8a1f97c-10b9-489f-9711-d6cd63f6e974-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.850532 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8a1f97c-10b9-489f-9711-d6cd63f6e974-config-data-custom\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.850826 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8a1f97c-10b9-489f-9711-d6cd63f6e974-scripts\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.851620 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8a1f97c-10b9-489f-9711-d6cd63f6e974-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.863376 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmv7f\" (UniqueName: 
\"kubernetes.io/projected/c8a1f97c-10b9-489f-9711-d6cd63f6e974-kube-api-access-rmv7f\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:14 crc kubenswrapper[4858]: I0202 17:32:14.882343 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8a1f97c-10b9-489f-9711-d6cd63f6e974-config-data\") pod \"cinder-api-0\" (UID: \"c8a1f97c-10b9-489f-9711-d6cd63f6e974\") " pod="openstack/cinder-api-0" Feb 02 17:32:15 crc kubenswrapper[4858]: I0202 17:32:15.175090 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 17:32:15 crc kubenswrapper[4858]: I0202 17:32:15.218694 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fef11f2-a89c-48f7-b0e8-4ed3045e028e","Type":"ContainerStarted","Data":"e26be3701a03f118e90dda4c7d592d6669de54f17ca55b29332db9682ada7d29"} Feb 02 17:32:15 crc kubenswrapper[4858]: I0202 17:32:15.218744 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fef11f2-a89c-48f7-b0e8-4ed3045e028e","Type":"ContainerStarted","Data":"4e34b3499fffc2c2e1f3d30e9679c89a662027085d35cf523cfc5754d00a98d1"} Feb 02 17:32:15 crc kubenswrapper[4858]: I0202 17:32:15.705802 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 17:32:16 crc kubenswrapper[4858]: I0202 17:32:16.245884 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8a1f97c-10b9-489f-9711-d6cd63f6e974","Type":"ContainerStarted","Data":"86ac5d7fa764d1a655801b22e8239a83354529e35ad4f0bcf9972d5fd6d62044"} Feb 02 17:32:16 crc kubenswrapper[4858]: I0202 17:32:16.250374 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fef11f2-a89c-48f7-b0e8-4ed3045e028e","Type":"ContainerStarted","Data":"707d883e2c04c4ab14c027ab3cb258e9d8a994ceaeffe7f21f4e2125d8fbfff8"} Feb 02 17:32:16 crc kubenswrapper[4858]: I0202 17:32:16.418046 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11466796-476b-4a46-9859-1770359abf01" path="/var/lib/kubelet/pods/11466796-476b-4a46-9859-1770359abf01/volumes" Feb 02 17:32:16 crc kubenswrapper[4858]: I0202 17:32:16.418927 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de5f94f8-30f4-4e19-8195-eb6a5b281de9" path="/var/lib/kubelet/pods/de5f94f8-30f4-4e19-8195-eb6a5b281de9/volumes" Feb 02 17:32:17 crc kubenswrapper[4858]: I0202 17:32:17.277290 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8a1f97c-10b9-489f-9711-d6cd63f6e974","Type":"ContainerStarted","Data":"9a2911340a3a38bfcea565ef020c5241977df166c26e4232cc5bb72edbd9efeb"} Feb 02 17:32:17 crc kubenswrapper[4858]: I0202 17:32:17.277806 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8a1f97c-10b9-489f-9711-d6cd63f6e974","Type":"ContainerStarted","Data":"50d8f0ab6549ef55aa343d368b5dc3c118471ca6aca836b8afc10b3499832b49"} Feb 02 17:32:17 crc kubenswrapper[4858]: I0202 17:32:17.279545 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 17:32:17 crc kubenswrapper[4858]: I0202 17:32:17.312698 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.312677732 podStartE2EDuration="3.312677732s" 
podCreationTimestamp="2026-02-02 17:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:32:17.311482417 +0000 UTC m=+1038.463897682" watchObservedRunningTime="2026-02-02 17:32:17.312677732 +0000 UTC m=+1038.465092997" Feb 02 17:32:18 crc kubenswrapper[4858]: I0202 17:32:18.291194 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fef11f2-a89c-48f7-b0e8-4ed3045e028e","Type":"ContainerStarted","Data":"1d3b2ea7e207ffd24fbdac5d9c146dc0caf133d566370b056c89da13a1735044"} Feb 02 17:32:18 crc kubenswrapper[4858]: I0202 17:32:18.291532 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 17:32:18 crc kubenswrapper[4858]: I0202 17:32:18.317886 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.029580336 podStartE2EDuration="6.317865186s" podCreationTimestamp="2026-02-02 17:32:12 +0000 UTC" firstStartedPulling="2026-02-02 17:32:13.355227189 +0000 UTC m=+1034.507642454" lastFinishedPulling="2026-02-02 17:32:17.643512039 +0000 UTC m=+1038.795927304" observedRunningTime="2026-02-02 17:32:18.309550749 +0000 UTC m=+1039.461966034" watchObservedRunningTime="2026-02-02 17:32:18.317865186 +0000 UTC m=+1039.470280461" Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.051186 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.078320 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.141745 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-zst5s"] Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.141986 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-zst5s" podUID="60620bf9-46f0-4b74-b019-a24815a64e3d" containerName="dnsmasq-dns" containerID="cri-o://e7f1dca080c1c44b8f46e055cfcf0cf379f9192cfa71726e6d42866e3e3398f7" gracePeriod=10 Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.316496 4858 generic.go:334] "Generic (PLEG): container finished" podID="60620bf9-46f0-4b74-b019-a24815a64e3d" containerID="e7f1dca080c1c44b8f46e055cfcf0cf379f9192cfa71726e6d42866e3e3398f7" exitCode=0 Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.317376 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-zst5s" event={"ID":"60620bf9-46f0-4b74-b019-a24815a64e3d","Type":"ContainerDied","Data":"e7f1dca080c1c44b8f46e055cfcf0cf379f9192cfa71726e6d42866e3e3398f7"} Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.331148 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-85ff748b95-zst5s" podUID="60620bf9-46f0-4b74-b019-a24815a64e3d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.162:5353: connect: connection refused" Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.454848 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.532037 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.733683 
4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.884614 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66xvt\" (UniqueName: \"kubernetes.io/projected/60620bf9-46f0-4b74-b019-a24815a64e3d-kube-api-access-66xvt\") pod \"60620bf9-46f0-4b74-b019-a24815a64e3d\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.884705 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-ovsdbserver-nb\") pod \"60620bf9-46f0-4b74-b019-a24815a64e3d\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.884792 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-dns-swift-storage-0\") pod \"60620bf9-46f0-4b74-b019-a24815a64e3d\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.884830 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-ovsdbserver-sb\") pod \"60620bf9-46f0-4b74-b019-a24815a64e3d\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.884897 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-dns-svc\") pod \"60620bf9-46f0-4b74-b019-a24815a64e3d\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.884934 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-config\") pod \"60620bf9-46f0-4b74-b019-a24815a64e3d\" (UID: \"60620bf9-46f0-4b74-b019-a24815a64e3d\") " Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.893932 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60620bf9-46f0-4b74-b019-a24815a64e3d-kube-api-access-66xvt" (OuterVolumeSpecName: "kube-api-access-66xvt") pod "60620bf9-46f0-4b74-b019-a24815a64e3d" (UID: "60620bf9-46f0-4b74-b019-a24815a64e3d"). InnerVolumeSpecName "kube-api-access-66xvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.939299 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "60620bf9-46f0-4b74-b019-a24815a64e3d" (UID: "60620bf9-46f0-4b74-b019-a24815a64e3d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.939900 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "60620bf9-46f0-4b74-b019-a24815a64e3d" (UID: "60620bf9-46f0-4b74-b019-a24815a64e3d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.950739 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "60620bf9-46f0-4b74-b019-a24815a64e3d" (UID: "60620bf9-46f0-4b74-b019-a24815a64e3d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.955130 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-config" (OuterVolumeSpecName: "config") pod "60620bf9-46f0-4b74-b019-a24815a64e3d" (UID: "60620bf9-46f0-4b74-b019-a24815a64e3d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.961380 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "60620bf9-46f0-4b74-b019-a24815a64e3d" (UID: "60620bf9-46f0-4b74-b019-a24815a64e3d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.987965 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.988019 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.988035 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66xvt\" (UniqueName: \"kubernetes.io/projected/60620bf9-46f0-4b74-b019-a24815a64e3d-kube-api-access-66xvt\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.988051 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.988063 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:19 crc kubenswrapper[4858]: I0202 17:32:19.988073 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60620bf9-46f0-4b74-b019-a24815a64e3d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:20 crc kubenswrapper[4858]: I0202 17:32:20.328481 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3" containerName="cinder-scheduler" containerID="cri-o://bc50c8b01de0282f96aa61e69e3f4b51606922211a2174d2dd17dbbb7353540f" gracePeriod=30 Feb 02 17:32:20 crc kubenswrapper[4858]: I0202 17:32:20.328582 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-zst5s" Feb 02 17:32:20 crc kubenswrapper[4858]: I0202 17:32:20.330579 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-zst5s" event={"ID":"60620bf9-46f0-4b74-b019-a24815a64e3d","Type":"ContainerDied","Data":"f40eb0679670668dbe95928d8f4e28aeafc03d31a6e886c90fb54fb49808927d"} Feb 02 17:32:20 crc kubenswrapper[4858]: I0202 17:32:20.330641 4858 scope.go:117] "RemoveContainer" containerID="e7f1dca080c1c44b8f46e055cfcf0cf379f9192cfa71726e6d42866e3e3398f7" Feb 02 17:32:20 crc kubenswrapper[4858]: I0202 17:32:20.331117 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3" containerName="probe" containerID="cri-o://725caa535900479fe43f0a1df52a011b37ac2e952969c31f59b39ba34191b7f1" gracePeriod=30 Feb 02 17:32:20 crc kubenswrapper[4858]: I0202 17:32:20.360700 4858 scope.go:117] "RemoveContainer" containerID="b3801a8076b2612ad9fe1e530c14b9f14ccc87591afa9532e5df12f6a9a5ab88" Feb 02 17:32:20 crc kubenswrapper[4858]: I0202 17:32:20.364404 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-zst5s"] Feb 02 17:32:20 crc kubenswrapper[4858]: I0202 17:32:20.370927 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-zst5s"] Feb 02 17:32:20 crc kubenswrapper[4858]: I0202 17:32:20.409543 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60620bf9-46f0-4b74-b019-a24815a64e3d" path="/var/lib/kubelet/pods/60620bf9-46f0-4b74-b019-a24815a64e3d/volumes" Feb 02 17:32:21 crc kubenswrapper[4858]: I0202 17:32:21.339211 4858 generic.go:334] "Generic (PLEG): container finished" podID="d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3" containerID="725caa535900479fe43f0a1df52a011b37ac2e952969c31f59b39ba34191b7f1" exitCode=0 Feb 02 17:32:21 crc kubenswrapper[4858]: I0202 17:32:21.339285 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3","Type":"ContainerDied","Data":"725caa535900479fe43f0a1df52a011b37ac2e952969c31f59b39ba34191b7f1"} Feb 02 17:32:22 crc kubenswrapper[4858]: I0202 17:32:22.138906 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:32:22 crc kubenswrapper[4858]: I0202 17:32:22.657768 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.015524 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.076537 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.168684 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-public-tls-certs\") pod \"957f5537-848b-45b5-9bc2-7dbffbad0fed\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.168775 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-ovndb-tls-certs\") pod \"957f5537-848b-45b5-9bc2-7dbffbad0fed\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.169021 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl274\" (UniqueName: \"kubernetes.io/projected/957f5537-848b-45b5-9bc2-7dbffbad0fed-kube-api-access-gl274\") pod \"957f5537-848b-45b5-9bc2-7dbffbad0fed\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.169101 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-httpd-config\") pod \"957f5537-848b-45b5-9bc2-7dbffbad0fed\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.169153 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-combined-ca-bundle\") pod \"957f5537-848b-45b5-9bc2-7dbffbad0fed\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.169176 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-internal-tls-certs\") pod \"957f5537-848b-45b5-9bc2-7dbffbad0fed\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.169245 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-config\") pod \"957f5537-848b-45b5-9bc2-7dbffbad0fed\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.176675 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "957f5537-848b-45b5-9bc2-7dbffbad0fed" (UID: "957f5537-848b-45b5-9bc2-7dbffbad0fed"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.193873 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/957f5537-848b-45b5-9bc2-7dbffbad0fed-kube-api-access-gl274" (OuterVolumeSpecName: "kube-api-access-gl274") pod "957f5537-848b-45b5-9bc2-7dbffbad0fed" (UID: "957f5537-848b-45b5-9bc2-7dbffbad0fed"). InnerVolumeSpecName "kube-api-access-gl274". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.266155 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "957f5537-848b-45b5-9bc2-7dbffbad0fed" (UID: "957f5537-848b-45b5-9bc2-7dbffbad0fed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.274153 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "957f5537-848b-45b5-9bc2-7dbffbad0fed" (UID: "957f5537-848b-45b5-9bc2-7dbffbad0fed"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.274614 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-public-tls-certs\") pod \"957f5537-848b-45b5-9bc2-7dbffbad0fed\" (UID: \"957f5537-848b-45b5-9bc2-7dbffbad0fed\") " Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.275186 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl274\" (UniqueName: \"kubernetes.io/projected/957f5537-848b-45b5-9bc2-7dbffbad0fed-kube-api-access-gl274\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.275210 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.275222 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:24 crc kubenswrapper[4858]: W0202 17:32:24.275313 4858 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/957f5537-848b-45b5-9bc2-7dbffbad0fed/volumes/kubernetes.io~secret/public-tls-certs Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.275326 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "957f5537-848b-45b5-9bc2-7dbffbad0fed" (UID: "957f5537-848b-45b5-9bc2-7dbffbad0fed"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.294395 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-config" (OuterVolumeSpecName: "config") pod "957f5537-848b-45b5-9bc2-7dbffbad0fed" (UID: "957f5537-848b-45b5-9bc2-7dbffbad0fed"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.304994 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "957f5537-848b-45b5-9bc2-7dbffbad0fed" (UID: "957f5537-848b-45b5-9bc2-7dbffbad0fed"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.313198 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "957f5537-848b-45b5-9bc2-7dbffbad0fed" (UID: "957f5537-848b-45b5-9bc2-7dbffbad0fed"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.377129 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.377178 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.377189 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.377201 4858 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/957f5537-848b-45b5-9bc2-7dbffbad0fed-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.384997 4858 generic.go:334] "Generic (PLEG): container finished" podID="957f5537-848b-45b5-9bc2-7dbffbad0fed" containerID="cd7e8ef4fb775371b61ba5efc5102f4c2da426998320120eda0d0b268a3d9f96" exitCode=0 Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.385070 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-655b4bfd7-48p76" event={"ID":"957f5537-848b-45b5-9bc2-7dbffbad0fed","Type":"ContainerDied","Data":"cd7e8ef4fb775371b61ba5efc5102f4c2da426998320120eda0d0b268a3d9f96"} Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.385428 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-655b4bfd7-48p76" event={"ID":"957f5537-848b-45b5-9bc2-7dbffbad0fed","Type":"ContainerDied","Data":"b951d087c3be3aee2a9cc1ae1bd4854c51b769ec98d267ad0660f6d51333d94b"} Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.385576 4858 scope.go:117] "RemoveContainer" containerID="58c48f2cdb1d647834064b7f623d4739cee1b51c67caeb895f426688b3d47862" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.385089 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-655b4bfd7-48p76" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.412118 4858 scope.go:117] "RemoveContainer" containerID="cd7e8ef4fb775371b61ba5efc5102f4c2da426998320120eda0d0b268a3d9f96" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.487993 4858 scope.go:117] "RemoveContainer" containerID="58c48f2cdb1d647834064b7f623d4739cee1b51c67caeb895f426688b3d47862" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.488045 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-655b4bfd7-48p76"] Feb 02 17:32:24 crc kubenswrapper[4858]: E0202 17:32:24.495219 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58c48f2cdb1d647834064b7f623d4739cee1b51c67caeb895f426688b3d47862\": container with ID starting with 58c48f2cdb1d647834064b7f623d4739cee1b51c67caeb895f426688b3d47862 not found: ID does not exist" containerID="58c48f2cdb1d647834064b7f623d4739cee1b51c67caeb895f426688b3d47862" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.495269 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58c48f2cdb1d647834064b7f623d4739cee1b51c67caeb895f426688b3d47862"} err="failed to get container status \"58c48f2cdb1d647834064b7f623d4739cee1b51c67caeb895f426688b3d47862\": rpc error: code = NotFound desc = could not find container \"58c48f2cdb1d647834064b7f623d4739cee1b51c67caeb895f426688b3d47862\": container with ID starting with 58c48f2cdb1d647834064b7f623d4739cee1b51c67caeb895f426688b3d47862 not found: ID does not exist" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.495300 4858 scope.go:117] "RemoveContainer" containerID="cd7e8ef4fb775371b61ba5efc5102f4c2da426998320120eda0d0b268a3d9f96" Feb 02 17:32:24 crc kubenswrapper[4858]: E0202 17:32:24.499460 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd7e8ef4fb775371b61ba5efc5102f4c2da426998320120eda0d0b268a3d9f96\": container with ID starting with cd7e8ef4fb775371b61ba5efc5102f4c2da426998320120eda0d0b268a3d9f96 not found: ID does not exist" containerID="cd7e8ef4fb775371b61ba5efc5102f4c2da426998320120eda0d0b268a3d9f96" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.499511 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd7e8ef4fb775371b61ba5efc5102f4c2da426998320120eda0d0b268a3d9f96"} err="failed to get container status \"cd7e8ef4fb775371b61ba5efc5102f4c2da426998320120eda0d0b268a3d9f96\": rpc error: code = NotFound desc = could not find container \"cd7e8ef4fb775371b61ba5efc5102f4c2da426998320120eda0d0b268a3d9f96\": container with ID starting with cd7e8ef4fb775371b61ba5efc5102f4c2da426998320120eda0d0b268a3d9f96 not found: ID does not exist" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.502210 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-655b4bfd7-48p76"] Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.697570 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.811170 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-68f4b57796-rhdnw" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.822598 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.889008 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-857c87669d-c45h7"] Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.889277 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-857c87669d-c45h7" podUID="24d5a090-abc7-4832-b6c6-2e36edf7d82e" containerName="horizon-log" containerID="cri-o://4ea20cb217d595f212f7c30c0c9b8a9c83b72304dd0b30e106b284e161374882" gracePeriod=30 Feb 02 17:32:24 crc kubenswrapper[4858]: I0202 17:32:24.889435 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-857c87669d-c45h7" podUID="24d5a090-abc7-4832-b6c6-2e36edf7d82e" containerName="horizon" containerID="cri-o://51444d04afc916f2112443d8aa3f3ff3ae56b53e450edd0dc4f8f72fcd2a1a61" gracePeriod=30 Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.006638 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.092651 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-config-data-custom\") pod \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.093367 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-config-data\") pod \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.093433 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-scripts\") pod \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.093691 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-combined-ca-bundle\") pod \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.093821 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxhdl\" (UniqueName: \"kubernetes.io/projected/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-kube-api-access-zxhdl\") pod \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.094041 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-etc-machine-id\") pod \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\" (UID: \"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3\") " Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.095092 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3" (UID: 
"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.102387 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-kube-api-access-zxhdl" (OuterVolumeSpecName: "kube-api-access-zxhdl") pod "d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3" (UID: "d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3"). InnerVolumeSpecName "kube-api-access-zxhdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.116182 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-scripts" (OuterVolumeSpecName: "scripts") pod "d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3" (UID: "d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.127229 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3" (UID: "d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.197147 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.197180 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.197192 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxhdl\" (UniqueName: \"kubernetes.io/projected/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-kube-api-access-zxhdl\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.197203 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.204115 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3" (UID: "d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.277616 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-config-data" (OuterVolumeSpecName: "config-data") pod "d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3" (UID: "d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.298467 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.298523 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.405309 4858 generic.go:334] "Generic (PLEG): container finished" podID="d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3" containerID="bc50c8b01de0282f96aa61e69e3f4b51606922211a2174d2dd17dbbb7353540f" exitCode=0 Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.405359 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.405437 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3","Type":"ContainerDied","Data":"bc50c8b01de0282f96aa61e69e3f4b51606922211a2174d2dd17dbbb7353540f"} Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.406030 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3","Type":"ContainerDied","Data":"a01407aa454a790a0d9867a6d9654bdb6cc2de4358b2ab199d6023ac7915ff2c"} Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.406064 4858 scope.go:117] "RemoveContainer" containerID="725caa535900479fe43f0a1df52a011b37ac2e952969c31f59b39ba34191b7f1" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.477212 4858 scope.go:117] "RemoveContainer" containerID="bc50c8b01de0282f96aa61e69e3f4b51606922211a2174d2dd17dbbb7353540f" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.480042 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.498043 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.507649 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 17:32:25 crc kubenswrapper[4858]: E0202 17:32:25.508104 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60620bf9-46f0-4b74-b019-a24815a64e3d" containerName="init" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.508127 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="60620bf9-46f0-4b74-b019-a24815a64e3d" containerName="init" Feb 02 17:32:25 crc kubenswrapper[4858]: E0202 17:32:25.508157 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="957f5537-848b-45b5-9bc2-7dbffbad0fed" containerName="neutron-httpd" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.508165 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="957f5537-848b-45b5-9bc2-7dbffbad0fed" containerName="neutron-httpd" Feb 02 17:32:25 crc kubenswrapper[4858]: E0202 17:32:25.508181 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3" containerName="cinder-scheduler" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.508188 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3" containerName="cinder-scheduler" Feb 02 17:32:25 crc kubenswrapper[4858]: E0202 17:32:25.508204 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="957f5537-848b-45b5-9bc2-7dbffbad0fed" containerName="neutron-api" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.508212 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="957f5537-848b-45b5-9bc2-7dbffbad0fed" containerName="neutron-api" Feb 02 17:32:25 crc kubenswrapper[4858]: E0202 17:32:25.508227 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60620bf9-46f0-4b74-b019-a24815a64e3d" containerName="dnsmasq-dns" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.508236 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="60620bf9-46f0-4b74-b019-a24815a64e3d" containerName="dnsmasq-dns" Feb 02 17:32:25 crc kubenswrapper[4858]: E0202 17:32:25.508264 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3" containerName="probe" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.508272 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3" containerName="probe" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.508468 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="957f5537-848b-45b5-9bc2-7dbffbad0fed" containerName="neutron-httpd" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.508483 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3" containerName="probe" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.508495 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="957f5537-848b-45b5-9bc2-7dbffbad0fed" containerName="neutron-api" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.508513 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3" containerName="cinder-scheduler" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.508538 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="60620bf9-46f0-4b74-b019-a24815a64e3d" containerName="dnsmasq-dns" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.532202 4858 scope.go:117] "RemoveContainer" containerID="725caa535900479fe43f0a1df52a011b37ac2e952969c31f59b39ba34191b7f1" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.535196 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 17:32:25 crc kubenswrapper[4858]: E0202 17:32:25.547214 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"725caa535900479fe43f0a1df52a011b37ac2e952969c31f59b39ba34191b7f1\": container with ID starting with 725caa535900479fe43f0a1df52a011b37ac2e952969c31f59b39ba34191b7f1 not found: ID does not exist" containerID="725caa535900479fe43f0a1df52a011b37ac2e952969c31f59b39ba34191b7f1" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.547272 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"725caa535900479fe43f0a1df52a011b37ac2e952969c31f59b39ba34191b7f1"} err="failed to get container status \"725caa535900479fe43f0a1df52a011b37ac2e952969c31f59b39ba34191b7f1\": rpc error: code = NotFound desc = could not find container \"725caa535900479fe43f0a1df52a011b37ac2e952969c31f59b39ba34191b7f1\": container with ID starting with 725caa535900479fe43f0a1df52a011b37ac2e952969c31f59b39ba34191b7f1 not found: ID does not exist" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.547307 4858 scope.go:117] "RemoveContainer" containerID="bc50c8b01de0282f96aa61e69e3f4b51606922211a2174d2dd17dbbb7353540f" Feb 02 17:32:25 crc kubenswrapper[4858]: E0202 17:32:25.549145 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc50c8b01de0282f96aa61e69e3f4b51606922211a2174d2dd17dbbb7353540f\": container with ID starting with bc50c8b01de0282f96aa61e69e3f4b51606922211a2174d2dd17dbbb7353540f not found: ID does not exist" containerID="bc50c8b01de0282f96aa61e69e3f4b51606922211a2174d2dd17dbbb7353540f" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.549171 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc50c8b01de0282f96aa61e69e3f4b51606922211a2174d2dd17dbbb7353540f"} err="failed to get container status \"bc50c8b01de0282f96aa61e69e3f4b51606922211a2174d2dd17dbbb7353540f\": rpc error: code = NotFound desc = could not find container \"bc50c8b01de0282f96aa61e69e3f4b51606922211a2174d2dd17dbbb7353540f\": container with ID starting with bc50c8b01de0282f96aa61e69e3f4b51606922211a2174d2dd17dbbb7353540f not found: ID does not exist" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.558494 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.562374 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.607516 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34935d73-a8f5-4b92-83fc-734815dbb836-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"34935d73-a8f5-4b92-83fc-734815dbb836\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.607792 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34935d73-a8f5-4b92-83fc-734815dbb836-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"34935d73-a8f5-4b92-83fc-734815dbb836\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.607914 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34935d73-a8f5-4b92-83fc-734815dbb836-scripts\") pod \"cinder-scheduler-0\" (UID: \"34935d73-a8f5-4b92-83fc-734815dbb836\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.608051 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34935d73-a8f5-4b92-83fc-734815dbb836-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"34935d73-a8f5-4b92-83fc-734815dbb836\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.608244 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34935d73-a8f5-4b92-83fc-734815dbb836-config-data\") pod \"cinder-scheduler-0\" (UID: \"34935d73-a8f5-4b92-83fc-734815dbb836\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.608365 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h6rx\" (UniqueName: \"kubernetes.io/projected/34935d73-a8f5-4b92-83fc-734815dbb836-kube-api-access-4h6rx\") pod \"cinder-scheduler-0\" (UID: \"34935d73-a8f5-4b92-83fc-734815dbb836\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.685545 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6fb4977965-lqqjm" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.710678 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34935d73-a8f5-4b92-83fc-734815dbb836-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"34935d73-a8f5-4b92-83fc-734815dbb836\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.710744 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34935d73-a8f5-4b92-83fc-734815dbb836-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"34935d73-a8f5-4b92-83fc-734815dbb836\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.710799 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34935d73-a8f5-4b92-83fc-734815dbb836-scripts\") pod \"cinder-scheduler-0\" (UID: \"34935d73-a8f5-4b92-83fc-734815dbb836\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.710835 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34935d73-a8f5-4b92-83fc-734815dbb836-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"34935d73-a8f5-4b92-83fc-734815dbb836\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.710895 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34935d73-a8f5-4b92-83fc-734815dbb836-config-data\") pod \"cinder-scheduler-0\" (UID: \"34935d73-a8f5-4b92-83fc-734815dbb836\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.710927 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h6rx\" (UniqueName: \"kubernetes.io/projected/34935d73-a8f5-4b92-83fc-734815dbb836-kube-api-access-4h6rx\") pod \"cinder-scheduler-0\" (UID: \"34935d73-a8f5-4b92-83fc-734815dbb836\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.712068 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/34935d73-a8f5-4b92-83fc-734815dbb836-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"34935d73-a8f5-4b92-83fc-734815dbb836\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.716276 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/34935d73-a8f5-4b92-83fc-734815dbb836-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"34935d73-a8f5-4b92-83fc-734815dbb836\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.726471 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34935d73-a8f5-4b92-83fc-734815dbb836-config-data\") pod \"cinder-scheduler-0\" (UID: \"34935d73-a8f5-4b92-83fc-734815dbb836\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.727397 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34935d73-a8f5-4b92-83fc-734815dbb836-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"34935d73-a8f5-4b92-83fc-734815dbb836\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.743414 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34935d73-a8f5-4b92-83fc-734815dbb836-scripts\") pod \"cinder-scheduler-0\" (UID: \"34935d73-a8f5-4b92-83fc-734815dbb836\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.745186 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h6rx\" (UniqueName: \"kubernetes.io/projected/34935d73-a8f5-4b92-83fc-734815dbb836-kube-api-access-4h6rx\") pod \"cinder-scheduler-0\" (UID: \"34935d73-a8f5-4b92-83fc-734815dbb836\") " pod="openstack/cinder-scheduler-0" Feb 02 17:32:25 crc kubenswrapper[4858]: I0202 17:32:25.935862 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.417025 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="957f5537-848b-45b5-9bc2-7dbffbad0fed" path="/var/lib/kubelet/pods/957f5537-848b-45b5-9bc2-7dbffbad0fed/volumes" Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.418654 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3" path="/var/lib/kubelet/pods/d1cbdfbe-d93e-4dcb-bc28-fbff6a59d3f3/volumes" Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.427113 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 17:32:26 crc kubenswrapper[4858]: W0202 17:32:26.428458 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34935d73_a8f5_4b92_83fc_734815dbb836.slice/crio-0e0c6faa40a51b520846b3e603f4a6ce9c737420178905d9f8fd9d8b1954d0ec WatchSource:0}: Error finding container 0e0c6faa40a51b520846b3e603f4a6ce9c737420178905d9f8fd9d8b1954d0ec: Status 404 returned error can't find the container with id 0e0c6faa40a51b520846b3e603f4a6ce9c737420178905d9f8fd9d8b1954d0ec Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.502306 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.505166 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.510384 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-wtmbr" Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.510597 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.511619 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.524261 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.634481 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-openstack-config\") pod \"openstackclient\" (UID: \"fb5b75ca-2fc5-41b6-99fb-f0707fb28364\") " pod="openstack/openstackclient" Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.634896 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fb5b75ca-2fc5-41b6-99fb-f0707fb28364\") " pod="openstack/openstackclient" Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.634933 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtmjz\" (UniqueName: \"kubernetes.io/projected/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-kube-api-access-xtmjz\") pod \"openstackclient\" (UID: \"fb5b75ca-2fc5-41b6-99fb-f0707fb28364\") " pod="openstack/openstackclient" Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.634998 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-openstack-config-secret\") pod \"openstackclient\" (UID: \"fb5b75ca-2fc5-41b6-99fb-f0707fb28364\") " pod="openstack/openstackclient" Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.736878 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtmjz\" (UniqueName: \"kubernetes.io/projected/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-kube-api-access-xtmjz\") pod \"openstackclient\" (UID: \"fb5b75ca-2fc5-41b6-99fb-f0707fb28364\") " pod="openstack/openstackclient" Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.736950 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-openstack-config-secret\") pod \"openstackclient\" (UID: \"fb5b75ca-2fc5-41b6-99fb-f0707fb28364\") " pod="openstack/openstackclient" Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.737030 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-openstack-config\") pod \"openstackclient\" (UID: \"fb5b75ca-2fc5-41b6-99fb-f0707fb28364\") " pod="openstack/openstackclient" Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.737171 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fb5b75ca-2fc5-41b6-99fb-f0707fb28364\") " pod="openstack/openstackclient" Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.738084 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-openstack-config\") pod \"openstackclient\" (UID: \"fb5b75ca-2fc5-41b6-99fb-f0707fb28364\") " pod="openstack/openstackclient" Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.744110 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-openstack-config-secret\") pod \"openstackclient\" (UID: \"fb5b75ca-2fc5-41b6-99fb-f0707fb28364\") " pod="openstack/openstackclient" Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.744262 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fb5b75ca-2fc5-41b6-99fb-f0707fb28364\") " pod="openstack/openstackclient" Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.756009 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtmjz\" (UniqueName: \"kubernetes.io/projected/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-kube-api-access-xtmjz\") pod \"openstackclient\" (UID: \"fb5b75ca-2fc5-41b6-99fb-f0707fb28364\") " pod="openstack/openstackclient" Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.858475 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.859376 4858 util.go:30] "No sandbox for pod can be found. 
Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.872012 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.883782 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.885548 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.898567 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.942083 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfz97\" (UniqueName: \"kubernetes.io/projected/d0882d39-e033-4ce8-8b09-76d55e1c281c-kube-api-access-pfz97\") pod \"openstackclient\" (UID: \"d0882d39-e033-4ce8-8b09-76d55e1c281c\") " pod="openstack/openstackclient"
Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.942194 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0882d39-e033-4ce8-8b09-76d55e1c281c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d0882d39-e033-4ce8-8b09-76d55e1c281c\") " pod="openstack/openstackclient"
Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.942224 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d0882d39-e033-4ce8-8b09-76d55e1c281c-openstack-config-secret\") pod \"openstackclient\" (UID: \"d0882d39-e033-4ce8-8b09-76d55e1c281c\") " pod="openstack/openstackclient"
Feb 02 17:32:26 crc kubenswrapper[4858]: I0202 17:32:26.942255 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d0882d39-e033-4ce8-8b09-76d55e1c281c-openstack-config\") pod \"openstackclient\" (UID: \"d0882d39-e033-4ce8-8b09-76d55e1c281c\") " pod="openstack/openstackclient"
Feb 02 17:32:26 crc kubenswrapper[4858]: E0202 17:32:26.993639 4858 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 02 17:32:26 crc kubenswrapper[4858]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_fb5b75ca-2fc5-41b6-99fb-f0707fb28364_0(661881bee7d7056502a20a9dbcaf52b74fac93140c811055f3603d120d7a570c): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"661881bee7d7056502a20a9dbcaf52b74fac93140c811055f3603d120d7a570c" Netns:"/var/run/netns/ede83b8d-0792-4a10-9d73-1eb2a002edaf" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=661881bee7d7056502a20a9dbcaf52b74fac93140c811055f3603d120d7a570c;K8S_POD_UID=fb5b75ca-2fc5-41b6-99fb-f0707fb28364" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/fb5b75ca-2fc5-41b6-99fb-f0707fb28364]: expected pod UID "fb5b75ca-2fc5-41b6-99fb-f0707fb28364" but got "d0882d39-e033-4ce8-8b09-76d55e1c281c" from Kube API
Feb 02 17:32:26 crc kubenswrapper[4858]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 02 17:32:26 crc kubenswrapper[4858]: > Feb 02 17:32:26 crc kubenswrapper[4858]: E0202 17:32:26.993698 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 02 17:32:26 crc kubenswrapper[4858]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_fb5b75ca-2fc5-41b6-99fb-f0707fb28364_0(661881bee7d7056502a20a9dbcaf52b74fac93140c811055f3603d120d7a570c): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"661881bee7d7056502a20a9dbcaf52b74fac93140c811055f3603d120d7a570c" Netns:"/var/run/netns/ede83b8d-0792-4a10-9d73-1eb2a002edaf" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=661881bee7d7056502a20a9dbcaf52b74fac93140c811055f3603d120d7a570c;K8S_POD_UID=fb5b75ca-2fc5-41b6-99fb-f0707fb28364" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/fb5b75ca-2fc5-41b6-99fb-f0707fb28364]: expected pod UID "fb5b75ca-2fc5-41b6-99fb-f0707fb28364" but got "d0882d39-e033-4ce8-8b09-76d55e1c281c" from Kube API Feb 02 17:32:26 crc kubenswrapper[4858]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 02 17:32:26 crc kubenswrapper[4858]: > pod="openstack/openstackclient" Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.043674 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfz97\" (UniqueName: \"kubernetes.io/projected/d0882d39-e033-4ce8-8b09-76d55e1c281c-kube-api-access-pfz97\") pod \"openstackclient\" (UID: \"d0882d39-e033-4ce8-8b09-76d55e1c281c\") " pod="openstack/openstackclient" Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.044161 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0882d39-e033-4ce8-8b09-76d55e1c281c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d0882d39-e033-4ce8-8b09-76d55e1c281c\") " pod="openstack/openstackclient" Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.044206 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d0882d39-e033-4ce8-8b09-76d55e1c281c-openstack-config-secret\") pod \"openstackclient\" (UID: \"d0882d39-e033-4ce8-8b09-76d55e1c281c\") " pod="openstack/openstackclient" Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.044251 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d0882d39-e033-4ce8-8b09-76d55e1c281c-openstack-config\") pod \"openstackclient\" (UID: 
\"d0882d39-e033-4ce8-8b09-76d55e1c281c\") " pod="openstack/openstackclient" Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.045344 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d0882d39-e033-4ce8-8b09-76d55e1c281c-openstack-config\") pod \"openstackclient\" (UID: \"d0882d39-e033-4ce8-8b09-76d55e1c281c\") " pod="openstack/openstackclient" Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.049343 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0882d39-e033-4ce8-8b09-76d55e1c281c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d0882d39-e033-4ce8-8b09-76d55e1c281c\") " pod="openstack/openstackclient" Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.050022 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d0882d39-e033-4ce8-8b09-76d55e1c281c-openstack-config-secret\") pod \"openstackclient\" (UID: \"d0882d39-e033-4ce8-8b09-76d55e1c281c\") " pod="openstack/openstackclient" Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.067663 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfz97\" (UniqueName: \"kubernetes.io/projected/d0882d39-e033-4ce8-8b09-76d55e1c281c-kube-api-access-pfz97\") pod \"openstackclient\" (UID: \"d0882d39-e033-4ce8-8b09-76d55e1c281c\") " pod="openstack/openstackclient" Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.318725 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.466893 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.467816 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"34935d73-a8f5-4b92-83fc-734815dbb836","Type":"ContainerStarted","Data":"491c46b369383caf777e1e1af2d20a8dfee781915bee84a47968db44c16e0115"} Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.467843 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"34935d73-a8f5-4b92-83fc-734815dbb836","Type":"ContainerStarted","Data":"0e0c6faa40a51b520846b3e603f4a6ce9c737420178905d9f8fd9d8b1954d0ec"} Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.472939 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="fb5b75ca-2fc5-41b6-99fb-f0707fb28364" podUID="d0882d39-e033-4ce8-8b09-76d55e1c281c" Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.490777 4858 util.go:30] "No sandbox for pod can be found. 
Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.562681 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-openstack-config-secret\") pod \"fb5b75ca-2fc5-41b6-99fb-f0707fb28364\" (UID: \"fb5b75ca-2fc5-41b6-99fb-f0707fb28364\") "
Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.562851 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-combined-ca-bundle\") pod \"fb5b75ca-2fc5-41b6-99fb-f0707fb28364\" (UID: \"fb5b75ca-2fc5-41b6-99fb-f0707fb28364\") "
Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.562916 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtmjz\" (UniqueName: \"kubernetes.io/projected/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-kube-api-access-xtmjz\") pod \"fb5b75ca-2fc5-41b6-99fb-f0707fb28364\" (UID: \"fb5b75ca-2fc5-41b6-99fb-f0707fb28364\") "
Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.563146 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-openstack-config\") pod \"fb5b75ca-2fc5-41b6-99fb-f0707fb28364\" (UID: \"fb5b75ca-2fc5-41b6-99fb-f0707fb28364\") "
Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.573809 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "fb5b75ca-2fc5-41b6-99fb-f0707fb28364" (UID: "fb5b75ca-2fc5-41b6-99fb-f0707fb28364"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.577346 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb5b75ca-2fc5-41b6-99fb-f0707fb28364" (UID: "fb5b75ca-2fc5-41b6-99fb-f0707fb28364"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.604103 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "fb5b75ca-2fc5-41b6-99fb-f0707fb28364" (UID: "fb5b75ca-2fc5-41b6-99fb-f0707fb28364"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.604257 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-kube-api-access-xtmjz" (OuterVolumeSpecName: "kube-api-access-xtmjz") pod "fb5b75ca-2fc5-41b6-99fb-f0707fb28364" (UID: "fb5b75ca-2fc5-41b6-99fb-f0707fb28364"). InnerVolumeSpecName "kube-api-access-xtmjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
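Teardown of the old fb5b75ca... instance mirrors the mount flow in reverse: UnmountVolume is queued per volume, the plugin's TearDown runs, and only then does the reconciler report the volume detached (the reconciler_common.go:293 lines that follow). A compressed sketch of that ordering, with invented names (not kubelet source):

    // Illustrative reverse of the mount flow: unmount first, report
    // "detached" only after TearDown has succeeded.
    package main

    import "fmt"

    func teardown(podUID string, volumes []string) {
    	for _, v := range volumes {
    		fmt.Printf("UnmountVolume started for volume %q pod %q\n", v, podUID)
    		// plugin-specific TearDown runs here; only on success does the
    		// reconciler mark the volume as detached from the node
    		fmt.Printf("UnmountVolume.TearDown succeeded for volume %q\n", v)
    		fmt.Printf("Volume detached for volume %q DevicePath \"\"\n", v)
    	}
    }

    func main() {
    	teardown("fb5b75ca-2fc5-41b6-99fb-f0707fb28364",
    		[]string{"openstack-config", "combined-ca-bundle", "openstack-config-secret"})
    }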
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.665235 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.665267 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtmjz\" (UniqueName: \"kubernetes.io/projected/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-kube-api-access-xtmjz\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.665279 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.665291 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fb5b75ca-2fc5-41b6-99fb-f0707fb28364-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.735420 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.807886 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.807945 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.808012 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.809013 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b38cf52a6ef125bca2bfc0fb953106251c191f34dbe401ffe7c0fa9cbe521a8f"} pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 17:32:27 crc kubenswrapper[4858]: I0202 17:32:27.809122 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" containerID="cri-o://b38cf52a6ef125bca2bfc0fb953106251c191f34dbe401ffe7c0fa9cbe521a8f" gracePeriod=600 Feb 02 17:32:28 crc kubenswrapper[4858]: I0202 17:32:28.094307 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 02 17:32:28 crc kubenswrapper[4858]: I0202 17:32:28.382283 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-b4fd7664d-fqkmq" Feb 02 17:32:28 crc kubenswrapper[4858]: I0202 17:32:28.415776 4858 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="fb5b75ca-2fc5-41b6-99fb-f0707fb28364" path="/var/lib/kubelet/pods/fb5b75ca-2fc5-41b6-99fb-f0707fb28364/volumes" Feb 02 17:32:28 crc kubenswrapper[4858]: I0202 17:32:28.430692 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-b4fd7664d-fqkmq" Feb 02 17:32:28 crc kubenswrapper[4858]: I0202 17:32:28.521925 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d0882d39-e033-4ce8-8b09-76d55e1c281c","Type":"ContainerStarted","Data":"64544125625fb206459f10e627d7c6d4d47a7c6be44bdd81c4056990777e03ec"} Feb 02 17:32:28 crc kubenswrapper[4858]: I0202 17:32:28.536502 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5877769c8-jgqfs"] Feb 02 17:32:28 crc kubenswrapper[4858]: I0202 17:32:28.536742 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5877769c8-jgqfs" podUID="72c71bde-c7f7-4e51-955d-e9a808664d2a" containerName="placement-log" containerID="cri-o://f11b6accc3bda352b71cd450a066557c0c3998983364e31d72eeee29df6e77d4" gracePeriod=30 Feb 02 17:32:28 crc kubenswrapper[4858]: I0202 17:32:28.537356 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5877769c8-jgqfs" podUID="72c71bde-c7f7-4e51-955d-e9a808664d2a" containerName="placement-api" containerID="cri-o://21da8d326ef04a2cef068e5d8c2a31a29d7d76ead36bd218d9d80d6f6a1691a7" gracePeriod=30 Feb 02 17:32:28 crc kubenswrapper[4858]: I0202 17:32:28.541160 4858 generic.go:334] "Generic (PLEG): container finished" podID="24d5a090-abc7-4832-b6c6-2e36edf7d82e" containerID="51444d04afc916f2112443d8aa3f3ff3ae56b53e450edd0dc4f8f72fcd2a1a61" exitCode=0 Feb 02 17:32:28 crc kubenswrapper[4858]: I0202 17:32:28.541254 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-857c87669d-c45h7" event={"ID":"24d5a090-abc7-4832-b6c6-2e36edf7d82e","Type":"ContainerDied","Data":"51444d04afc916f2112443d8aa3f3ff3ae56b53e450edd0dc4f8f72fcd2a1a61"} Feb 02 17:32:28 crc kubenswrapper[4858]: I0202 17:32:28.586512 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"34935d73-a8f5-4b92-83fc-734815dbb836","Type":"ContainerStarted","Data":"2d8278efd446e3c28365c0d7b97fa71472d048e36b77c107a608818ef816d84c"} Feb 02 17:32:28 crc kubenswrapper[4858]: I0202 17:32:28.591342 4858 generic.go:334] "Generic (PLEG): container finished" podID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerID="b38cf52a6ef125bca2bfc0fb953106251c191f34dbe401ffe7c0fa9cbe521a8f" exitCode=0 Feb 02 17:32:28 crc kubenswrapper[4858]: I0202 17:32:28.592146 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerDied","Data":"b38cf52a6ef125bca2bfc0fb953106251c191f34dbe401ffe7c0fa9cbe521a8f"} Feb 02 17:32:28 crc kubenswrapper[4858]: I0202 17:32:28.592178 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerStarted","Data":"29f5b545eb82d931c7c8ceb6afb897d3a7adcbb180bbad52cb5301078f6256a8"} Feb 02 17:32:28 crc kubenswrapper[4858]: I0202 17:32:28.592196 4858 scope.go:117] "RemoveContainer" containerID="a4515f303cdc3d4371d56381b323ae5d013576ca1083c363dcab5d75f03e2725" Feb 02 17:32:28 crc kubenswrapper[4858]: I0202 17:32:28.592306 4858 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 17:32:28 crc kubenswrapper[4858]: I0202 17:32:28.620427 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="fb5b75ca-2fc5-41b6-99fb-f0707fb28364" podUID="d0882d39-e033-4ce8-8b09-76d55e1c281c" Feb 02 17:32:28 crc kubenswrapper[4858]: I0202 17:32:28.660848 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.660820611 podStartE2EDuration="3.660820611s" podCreationTimestamp="2026-02-02 17:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:32:28.613463087 +0000 UTC m=+1049.765878352" watchObservedRunningTime="2026-02-02 17:32:28.660820611 +0000 UTC m=+1049.813235876" Feb 02 17:32:29 crc kubenswrapper[4858]: I0202 17:32:29.603310 4858 generic.go:334] "Generic (PLEG): container finished" podID="72c71bde-c7f7-4e51-955d-e9a808664d2a" containerID="f11b6accc3bda352b71cd450a066557c0c3998983364e31d72eeee29df6e77d4" exitCode=143 Feb 02 17:32:29 crc kubenswrapper[4858]: I0202 17:32:29.603518 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5877769c8-jgqfs" event={"ID":"72c71bde-c7f7-4e51-955d-e9a808664d2a","Type":"ContainerDied","Data":"f11b6accc3bda352b71cd450a066557c0c3998983364e31d72eeee29df6e77d4"} Feb 02 17:32:30 crc kubenswrapper[4858]: I0202 17:32:30.221955 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-857c87669d-c45h7" podUID="24d5a090-abc7-4832-b6c6-2e36edf7d82e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 02 17:32:30 crc kubenswrapper[4858]: I0202 17:32:30.935918 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.359206 4858 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.506761 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-public-tls-certs\") pod \"72c71bde-c7f7-4e51-955d-e9a808664d2a\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") "
Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.506831 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-combined-ca-bundle\") pod \"72c71bde-c7f7-4e51-955d-e9a808664d2a\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") "
Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.506879 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-scripts\") pod \"72c71bde-c7f7-4e51-955d-e9a808664d2a\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") "
Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.506937 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72c71bde-c7f7-4e51-955d-e9a808664d2a-logs\") pod \"72c71bde-c7f7-4e51-955d-e9a808664d2a\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") "
Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.507024 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqd5c\" (UniqueName: \"kubernetes.io/projected/72c71bde-c7f7-4e51-955d-e9a808664d2a-kube-api-access-zqd5c\") pod \"72c71bde-c7f7-4e51-955d-e9a808664d2a\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") "
Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.507051 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-internal-tls-certs\") pod \"72c71bde-c7f7-4e51-955d-e9a808664d2a\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") "
Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.507074 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-config-data\") pod \"72c71bde-c7f7-4e51-955d-e9a808664d2a\" (UID: \"72c71bde-c7f7-4e51-955d-e9a808664d2a\") "
Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.507529 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72c71bde-c7f7-4e51-955d-e9a808664d2a-logs" (OuterVolumeSpecName: "logs") pod "72c71bde-c7f7-4e51-955d-e9a808664d2a" (UID: "72c71bde-c7f7-4e51-955d-e9a808664d2a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.513876 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-scripts" (OuterVolumeSpecName: "scripts") pod "72c71bde-c7f7-4e51-955d-e9a808664d2a" (UID: "72c71bde-c7f7-4e51-955d-e9a808664d2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.521450 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72c71bde-c7f7-4e51-955d-e9a808664d2a-kube-api-access-zqd5c" (OuterVolumeSpecName: "kube-api-access-zqd5c") pod "72c71bde-c7f7-4e51-955d-e9a808664d2a" (UID: "72c71bde-c7f7-4e51-955d-e9a808664d2a"). InnerVolumeSpecName "kube-api-access-zqd5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.588253 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72c71bde-c7f7-4e51-955d-e9a808664d2a" (UID: "72c71bde-c7f7-4e51-955d-e9a808664d2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.597112 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-config-data" (OuterVolumeSpecName: "config-data") pod "72c71bde-c7f7-4e51-955d-e9a808664d2a" (UID: "72c71bde-c7f7-4e51-955d-e9a808664d2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.609492 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.609803 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72c71bde-c7f7-4e51-955d-e9a808664d2a-logs\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.609942 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqd5c\" (UniqueName: \"kubernetes.io/projected/72c71bde-c7f7-4e51-955d-e9a808664d2a-kube-api-access-zqd5c\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.610066 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.610148 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.641558 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "72c71bde-c7f7-4e51-955d-e9a808664d2a" (UID: "72c71bde-c7f7-4e51-955d-e9a808664d2a"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.666189 4858 generic.go:334] "Generic (PLEG): container finished" podID="72c71bde-c7f7-4e51-955d-e9a808664d2a" containerID="21da8d326ef04a2cef068e5d8c2a31a29d7d76ead36bd218d9d80d6f6a1691a7" exitCode=0 Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.666230 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5877769c8-jgqfs" event={"ID":"72c71bde-c7f7-4e51-955d-e9a808664d2a","Type":"ContainerDied","Data":"21da8d326ef04a2cef068e5d8c2a31a29d7d76ead36bd218d9d80d6f6a1691a7"} Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.666260 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5877769c8-jgqfs" event={"ID":"72c71bde-c7f7-4e51-955d-e9a808664d2a","Type":"ContainerDied","Data":"85a87db9ad2abb32a6ad3246910c58b499e468c4fb1bf4eb5959d93e2dd34a90"} Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.666277 4858 scope.go:117] "RemoveContainer" containerID="21da8d326ef04a2cef068e5d8c2a31a29d7d76ead36bd218d9d80d6f6a1691a7" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.666314 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5877769c8-jgqfs" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.681959 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "72c71bde-c7f7-4e51-955d-e9a808664d2a" (UID: "72c71bde-c7f7-4e51-955d-e9a808664d2a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.702736 4858 scope.go:117] "RemoveContainer" containerID="f11b6accc3bda352b71cd450a066557c0c3998983364e31d72eeee29df6e77d4" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.711679 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.711708 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72c71bde-c7f7-4e51-955d-e9a808664d2a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.727232 4858 scope.go:117] "RemoveContainer" containerID="21da8d326ef04a2cef068e5d8c2a31a29d7d76ead36bd218d9d80d6f6a1691a7" Feb 02 17:32:32 crc kubenswrapper[4858]: E0202 17:32:32.727725 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21da8d326ef04a2cef068e5d8c2a31a29d7d76ead36bd218d9d80d6f6a1691a7\": container with ID starting with 21da8d326ef04a2cef068e5d8c2a31a29d7d76ead36bd218d9d80d6f6a1691a7 not found: ID does not exist" containerID="21da8d326ef04a2cef068e5d8c2a31a29d7d76ead36bd218d9d80d6f6a1691a7" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.727771 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21da8d326ef04a2cef068e5d8c2a31a29d7d76ead36bd218d9d80d6f6a1691a7"} err="failed to get container status \"21da8d326ef04a2cef068e5d8c2a31a29d7d76ead36bd218d9d80d6f6a1691a7\": rpc error: code = NotFound desc = could not find container 
\"21da8d326ef04a2cef068e5d8c2a31a29d7d76ead36bd218d9d80d6f6a1691a7\": container with ID starting with 21da8d326ef04a2cef068e5d8c2a31a29d7d76ead36bd218d9d80d6f6a1691a7 not found: ID does not exist" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.727798 4858 scope.go:117] "RemoveContainer" containerID="f11b6accc3bda352b71cd450a066557c0c3998983364e31d72eeee29df6e77d4" Feb 02 17:32:32 crc kubenswrapper[4858]: E0202 17:32:32.728238 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f11b6accc3bda352b71cd450a066557c0c3998983364e31d72eeee29df6e77d4\": container with ID starting with f11b6accc3bda352b71cd450a066557c0c3998983364e31d72eeee29df6e77d4 not found: ID does not exist" containerID="f11b6accc3bda352b71cd450a066557c0c3998983364e31d72eeee29df6e77d4" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.728266 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f11b6accc3bda352b71cd450a066557c0c3998983364e31d72eeee29df6e77d4"} err="failed to get container status \"f11b6accc3bda352b71cd450a066557c0c3998983364e31d72eeee29df6e77d4\": rpc error: code = NotFound desc = could not find container \"f11b6accc3bda352b71cd450a066557c0c3998983364e31d72eeee29df6e77d4\": container with ID starting with f11b6accc3bda352b71cd450a066557c0c3998983364e31d72eeee29df6e77d4 not found: ID does not exist" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.950281 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7748685595-fdxjj"] Feb 02 17:32:32 crc kubenswrapper[4858]: E0202 17:32:32.950875 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c71bde-c7f7-4e51-955d-e9a808664d2a" containerName="placement-log" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.950901 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c71bde-c7f7-4e51-955d-e9a808664d2a" containerName="placement-log" Feb 02 17:32:32 crc kubenswrapper[4858]: E0202 17:32:32.950956 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c71bde-c7f7-4e51-955d-e9a808664d2a" containerName="placement-api" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.950966 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c71bde-c7f7-4e51-955d-e9a808664d2a" containerName="placement-api" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.951262 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c71bde-c7f7-4e51-955d-e9a808664d2a" containerName="placement-api" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.951296 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c71bde-c7f7-4e51-955d-e9a808664d2a" containerName="placement-log" Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.952520 4858 util.go:30] "No sandbox for pod can be found. 
Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.955538 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.959286 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.959477 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Feb 02 17:32:32 crc kubenswrapper[4858]: I0202 17:32:32.985799 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7748685595-fdxjj"]
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.019380 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cdd18b7-595d-4635-9a17-32be92896da1-combined-ca-bundle\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj"
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.019450 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cdd18b7-595d-4635-9a17-32be92896da1-log-httpd\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj"
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.019539 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cdd18b7-595d-4635-9a17-32be92896da1-run-httpd\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj"
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.019606 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6cdd18b7-595d-4635-9a17-32be92896da1-etc-swift\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj"
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.020216 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5877769c8-jgqfs"]
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.021225 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cdd18b7-595d-4635-9a17-32be92896da1-public-tls-certs\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj"
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.021315 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cdd18b7-595d-4635-9a17-32be92896da1-internal-tls-certs\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj"
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.021358 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cdd18b7-595d-4635-9a17-32be92896da1-config-data\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj"
\"kubernetes.io/secret/6cdd18b7-595d-4635-9a17-32be92896da1-config-data\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj" Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.021393 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrbkh\" (UniqueName: \"kubernetes.io/projected/6cdd18b7-595d-4635-9a17-32be92896da1-kube-api-access-xrbkh\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj" Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.031621 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5877769c8-jgqfs"] Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.122567 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cdd18b7-595d-4635-9a17-32be92896da1-run-httpd\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj" Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.122663 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6cdd18b7-595d-4635-9a17-32be92896da1-etc-swift\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj" Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.122718 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cdd18b7-595d-4635-9a17-32be92896da1-public-tls-certs\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj" Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.122767 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cdd18b7-595d-4635-9a17-32be92896da1-internal-tls-certs\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj" Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.122790 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cdd18b7-595d-4635-9a17-32be92896da1-config-data\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj" Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.122811 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrbkh\" (UniqueName: \"kubernetes.io/projected/6cdd18b7-595d-4635-9a17-32be92896da1-kube-api-access-xrbkh\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj" Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.122850 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cdd18b7-595d-4635-9a17-32be92896da1-combined-ca-bundle\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj" Feb 02 17:32:33 crc kubenswrapper[4858]: 
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.123460 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cdd18b7-595d-4635-9a17-32be92896da1-log-httpd\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj"
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.123722 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cdd18b7-595d-4635-9a17-32be92896da1-run-httpd\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj"
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.129632 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6cdd18b7-595d-4635-9a17-32be92896da1-etc-swift\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj"
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.130575 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cdd18b7-595d-4635-9a17-32be92896da1-internal-tls-certs\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj"
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.131710 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cdd18b7-595d-4635-9a17-32be92896da1-public-tls-certs\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj"
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.131804 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cdd18b7-595d-4635-9a17-32be92896da1-combined-ca-bundle\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj"
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.139399 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cdd18b7-595d-4635-9a17-32be92896da1-config-data\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj"
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.145922 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrbkh\" (UniqueName: \"kubernetes.io/projected/6cdd18b7-595d-4635-9a17-32be92896da1-kube-api-access-xrbkh\") pod \"swift-proxy-7748685595-fdxjj\" (UID: \"6cdd18b7-595d-4635-9a17-32be92896da1\") " pod="openstack/swift-proxy-7748685595-fdxjj"
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.278415 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7748685595-fdxjj"
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.395237 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.395561 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0fef11f2-a89c-48f7-b0e8-4ed3045e028e" containerName="ceilometer-central-agent" containerID="cri-o://e26be3701a03f118e90dda4c7d592d6669de54f17ca55b29332db9682ada7d29" gracePeriod=30
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.395683 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0fef11f2-a89c-48f7-b0e8-4ed3045e028e" containerName="proxy-httpd" containerID="cri-o://1d3b2ea7e207ffd24fbdac5d9c146dc0caf133d566370b056c89da13a1735044" gracePeriod=30
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.395716 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0fef11f2-a89c-48f7-b0e8-4ed3045e028e" containerName="sg-core" containerID="cri-o://707d883e2c04c4ab14c027ab3cb258e9d8a994ceaeffe7f21f4e2125d8fbfff8" gracePeriod=30
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.405005 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0fef11f2-a89c-48f7-b0e8-4ed3045e028e" containerName="ceilometer-notification-agent" containerID="cri-o://4e34b3499fffc2c2e1f3d30e9679c89a662027085d35cf523cfc5754d00a98d1" gracePeriod=30
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.408255 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.678782 4858 generic.go:334] "Generic (PLEG): container finished" podID="0fef11f2-a89c-48f7-b0e8-4ed3045e028e" containerID="1d3b2ea7e207ffd24fbdac5d9c146dc0caf133d566370b056c89da13a1735044" exitCode=0
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.678818 4858 generic.go:334] "Generic (PLEG): container finished" podID="0fef11f2-a89c-48f7-b0e8-4ed3045e028e" containerID="707d883e2c04c4ab14c027ab3cb258e9d8a994ceaeffe7f21f4e2125d8fbfff8" exitCode=2
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.678870 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fef11f2-a89c-48f7-b0e8-4ed3045e028e","Type":"ContainerDied","Data":"1d3b2ea7e207ffd24fbdac5d9c146dc0caf133d566370b056c89da13a1735044"}
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.678916 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fef11f2-a89c-48f7-b0e8-4ed3045e028e","Type":"ContainerDied","Data":"707d883e2c04c4ab14c027ab3cb258e9d8a994ceaeffe7f21f4e2125d8fbfff8"}
Feb 02 17:32:33 crc kubenswrapper[4858]: I0202 17:32:33.931474 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7748685595-fdxjj"]
Feb 02 17:32:33 crc kubenswrapper[4858]: W0202 17:32:33.939736 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cdd18b7_595d_4635_9a17_32be92896da1.slice/crio-50fa6097bd7b69fe5c2b823904170a206955692287fc5a7a304896ab95495853 WatchSource:0}: Error finding container 50fa6097bd7b69fe5c2b823904170a206955692287fc5a7a304896ab95495853: Status 404 returned error can't find the container with id 50fa6097bd7b69fe5c2b823904170a206955692287fc5a7a304896ab95495853
Feb 02 17:32:34 crc kubenswrapper[4858]: I0202 17:32:34.411175 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72c71bde-c7f7-4e51-955d-e9a808664d2a" path="/var/lib/kubelet/pods/72c71bde-c7f7-4e51-955d-e9a808664d2a/volumes"
Feb 02 17:32:34 crc kubenswrapper[4858]: I0202 17:32:34.691954 4858 generic.go:334] "Generic (PLEG): container finished" podID="0fef11f2-a89c-48f7-b0e8-4ed3045e028e" containerID="e26be3701a03f118e90dda4c7d592d6669de54f17ca55b29332db9682ada7d29" exitCode=0
Feb 02 17:32:34 crc kubenswrapper[4858]: I0202 17:32:34.692006 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fef11f2-a89c-48f7-b0e8-4ed3045e028e","Type":"ContainerDied","Data":"e26be3701a03f118e90dda4c7d592d6669de54f17ca55b29332db9682ada7d29"}
Feb 02 17:32:34 crc kubenswrapper[4858]: I0202 17:32:34.693632 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7748685595-fdxjj" event={"ID":"6cdd18b7-595d-4635-9a17-32be92896da1","Type":"ContainerStarted","Data":"7c38db440f17812d31f87817ba3e573ae88548d3833c512970593f6ef0f84599"}
Feb 02 17:32:34 crc kubenswrapper[4858]: I0202 17:32:34.693668 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7748685595-fdxjj" event={"ID":"6cdd18b7-595d-4635-9a17-32be92896da1","Type":"ContainerStarted","Data":"50fa6097bd7b69fe5c2b823904170a206955692287fc5a7a304896ab95495853"}
Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.160414 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.578809 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-67p9g"]
Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.580189 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-67p9g"
Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.594036 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-67p9g"]
Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.710177 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvps6\" (UniqueName: \"kubernetes.io/projected/51908721-b3a6-4ecb-b0bc-041a43ecba5e-kube-api-access-cvps6\") pod \"nova-api-db-create-67p9g\" (UID: \"51908721-b3a6-4ecb-b0bc-041a43ecba5e\") " pod="openstack/nova-api-db-create-67p9g"
Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.710391 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51908721-b3a6-4ecb-b0bc-041a43ecba5e-operator-scripts\") pod \"nova-api-db-create-67p9g\" (UID: \"51908721-b3a6-4ecb-b0bc-041a43ecba5e\") " pod="openstack/nova-api-db-create-67p9g"
Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.794342 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-cpn8d"]
Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.797527 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cpn8d"
Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.813145 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvps6\" (UniqueName: \"kubernetes.io/projected/51908721-b3a6-4ecb-b0bc-041a43ecba5e-kube-api-access-cvps6\") pod \"nova-api-db-create-67p9g\" (UID: \"51908721-b3a6-4ecb-b0bc-041a43ecba5e\") " pod="openstack/nova-api-db-create-67p9g"
Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.813202 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51908721-b3a6-4ecb-b0bc-041a43ecba5e-operator-scripts\") pod \"nova-api-db-create-67p9g\" (UID: \"51908721-b3a6-4ecb-b0bc-041a43ecba5e\") " pod="openstack/nova-api-db-create-67p9g"
Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.813937 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51908721-b3a6-4ecb-b0bc-041a43ecba5e-operator-scripts\") pod \"nova-api-db-create-67p9g\" (UID: \"51908721-b3a6-4ecb-b0bc-041a43ecba5e\") " pod="openstack/nova-api-db-create-67p9g"
Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.827094 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ce0f-account-create-update-cg8px"]
Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.828351 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ce0f-account-create-update-cg8px"
Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.831319 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.836610 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cpn8d"]
Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.845159 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvps6\" (UniqueName: \"kubernetes.io/projected/51908721-b3a6-4ecb-b0bc-041a43ecba5e-kube-api-access-cvps6\") pod \"nova-api-db-create-67p9g\" (UID: \"51908721-b3a6-4ecb-b0bc-041a43ecba5e\") " pod="openstack/nova-api-db-create-67p9g"
Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.870139 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ce0f-account-create-update-cg8px"]
Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.896734 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-67p9g"
Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.911459 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-cqkx4"]
Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.912873 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-cqkx4"
Need to start a new one" pod="openstack/nova-cell1-db-create-cqkx4" Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.916564 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-447s8\" (UniqueName: \"kubernetes.io/projected/e6858894-d212-4bb0-a6dc-5e7633b29b58-kube-api-access-447s8\") pod \"nova-api-ce0f-account-create-update-cg8px\" (UID: \"e6858894-d212-4bb0-a6dc-5e7633b29b58\") " pod="openstack/nova-api-ce0f-account-create-update-cg8px" Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.916638 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e58dfef0-aeb5-4f3d-bf54-f4c51cf88901-operator-scripts\") pod \"nova-cell0-db-create-cpn8d\" (UID: \"e58dfef0-aeb5-4f3d-bf54-f4c51cf88901\") " pod="openstack/nova-cell0-db-create-cpn8d" Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.916850 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6858894-d212-4bb0-a6dc-5e7633b29b58-operator-scripts\") pod \"nova-api-ce0f-account-create-update-cg8px\" (UID: \"e6858894-d212-4bb0-a6dc-5e7633b29b58\") " pod="openstack/nova-api-ce0f-account-create-update-cg8px" Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.917117 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl4pd\" (UniqueName: \"kubernetes.io/projected/e58dfef0-aeb5-4f3d-bf54-f4c51cf88901-kube-api-access-hl4pd\") pod \"nova-cell0-db-create-cpn8d\" (UID: \"e58dfef0-aeb5-4f3d-bf54-f4c51cf88901\") " pod="openstack/nova-cell0-db-create-cpn8d" Feb 02 17:32:36 crc kubenswrapper[4858]: I0202 17:32:36.938816 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-cqkx4"] Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.014584 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-4698-account-create-update-lnpzg"] Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.016216 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-4698-account-create-update-lnpzg" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.018669 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e804b92-5b91-414c-ab96-2c679b264a85-operator-scripts\") pod \"nova-cell1-db-create-cqkx4\" (UID: \"9e804b92-5b91-414c-ab96-2c679b264a85\") " pod="openstack/nova-cell1-db-create-cqkx4" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.018794 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6858894-d212-4bb0-a6dc-5e7633b29b58-operator-scripts\") pod \"nova-api-ce0f-account-create-update-cg8px\" (UID: \"e6858894-d212-4bb0-a6dc-5e7633b29b58\") " pod="openstack/nova-api-ce0f-account-create-update-cg8px" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.018856 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.018870 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl4pd\" (UniqueName: \"kubernetes.io/projected/e58dfef0-aeb5-4f3d-bf54-f4c51cf88901-kube-api-access-hl4pd\") pod \"nova-cell0-db-create-cpn8d\" (UID: \"e58dfef0-aeb5-4f3d-bf54-f4c51cf88901\") " pod="openstack/nova-cell0-db-create-cpn8d" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.018924 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-447s8\" (UniqueName: \"kubernetes.io/projected/e6858894-d212-4bb0-a6dc-5e7633b29b58-kube-api-access-447s8\") pod \"nova-api-ce0f-account-create-update-cg8px\" (UID: \"e6858894-d212-4bb0-a6dc-5e7633b29b58\") " pod="openstack/nova-api-ce0f-account-create-update-cg8px" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.018955 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffl56\" (UniqueName: \"kubernetes.io/projected/9e804b92-5b91-414c-ab96-2c679b264a85-kube-api-access-ffl56\") pod \"nova-cell1-db-create-cqkx4\" (UID: \"9e804b92-5b91-414c-ab96-2c679b264a85\") " pod="openstack/nova-cell1-db-create-cqkx4" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.018986 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e58dfef0-aeb5-4f3d-bf54-f4c51cf88901-operator-scripts\") pod \"nova-cell0-db-create-cpn8d\" (UID: \"e58dfef0-aeb5-4f3d-bf54-f4c51cf88901\") " pod="openstack/nova-cell0-db-create-cpn8d" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.019667 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e58dfef0-aeb5-4f3d-bf54-f4c51cf88901-operator-scripts\") pod \"nova-cell0-db-create-cpn8d\" (UID: \"e58dfef0-aeb5-4f3d-bf54-f4c51cf88901\") " pod="openstack/nova-cell0-db-create-cpn8d" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.020041 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6858894-d212-4bb0-a6dc-5e7633b29b58-operator-scripts\") pod \"nova-api-ce0f-account-create-update-cg8px\" (UID: \"e6858894-d212-4bb0-a6dc-5e7633b29b58\") " pod="openstack/nova-api-ce0f-account-create-update-cg8px" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 
17:32:37.027664 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4698-account-create-update-lnpzg"] Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.039390 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl4pd\" (UniqueName: \"kubernetes.io/projected/e58dfef0-aeb5-4f3d-bf54-f4c51cf88901-kube-api-access-hl4pd\") pod \"nova-cell0-db-create-cpn8d\" (UID: \"e58dfef0-aeb5-4f3d-bf54-f4c51cf88901\") " pod="openstack/nova-cell0-db-create-cpn8d" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.058276 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-447s8\" (UniqueName: \"kubernetes.io/projected/e6858894-d212-4bb0-a6dc-5e7633b29b58-kube-api-access-447s8\") pod \"nova-api-ce0f-account-create-update-cg8px\" (UID: \"e6858894-d212-4bb0-a6dc-5e7633b29b58\") " pod="openstack/nova-api-ce0f-account-create-update-cg8px" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.120731 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffl56\" (UniqueName: \"kubernetes.io/projected/9e804b92-5b91-414c-ab96-2c679b264a85-kube-api-access-ffl56\") pod \"nova-cell1-db-create-cqkx4\" (UID: \"9e804b92-5b91-414c-ab96-2c679b264a85\") " pod="openstack/nova-cell1-db-create-cqkx4" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.121011 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e804b92-5b91-414c-ab96-2c679b264a85-operator-scripts\") pod \"nova-cell1-db-create-cqkx4\" (UID: \"9e804b92-5b91-414c-ab96-2c679b264a85\") " pod="openstack/nova-cell1-db-create-cqkx4" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.121141 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-cpn8d" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.121241 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67jxp\" (UniqueName: \"kubernetes.io/projected/d0fd6b61-532c-4002-bc57-c692aa8255f2-kube-api-access-67jxp\") pod \"nova-cell0-4698-account-create-update-lnpzg\" (UID: \"d0fd6b61-532c-4002-bc57-c692aa8255f2\") " pod="openstack/nova-cell0-4698-account-create-update-lnpzg" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.121364 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0fd6b61-532c-4002-bc57-c692aa8255f2-operator-scripts\") pod \"nova-cell0-4698-account-create-update-lnpzg\" (UID: \"d0fd6b61-532c-4002-bc57-c692aa8255f2\") " pod="openstack/nova-cell0-4698-account-create-update-lnpzg" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.122054 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e804b92-5b91-414c-ab96-2c679b264a85-operator-scripts\") pod \"nova-cell1-db-create-cqkx4\" (UID: \"9e804b92-5b91-414c-ab96-2c679b264a85\") " pod="openstack/nova-cell1-db-create-cqkx4" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.139613 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffl56\" (UniqueName: \"kubernetes.io/projected/9e804b92-5b91-414c-ab96-2c679b264a85-kube-api-access-ffl56\") pod \"nova-cell1-db-create-cqkx4\" (UID: \"9e804b92-5b91-414c-ab96-2c679b264a85\") " pod="openstack/nova-cell1-db-create-cqkx4" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.192618 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ce0f-account-create-update-cg8px" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.205919 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-45eb-account-create-update-hxbqj"] Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.207415 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-45eb-account-create-update-hxbqj" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.210682 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.220696 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-45eb-account-create-update-hxbqj"] Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.225060 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67jxp\" (UniqueName: \"kubernetes.io/projected/d0fd6b61-532c-4002-bc57-c692aa8255f2-kube-api-access-67jxp\") pod \"nova-cell0-4698-account-create-update-lnpzg\" (UID: \"d0fd6b61-532c-4002-bc57-c692aa8255f2\") " pod="openstack/nova-cell0-4698-account-create-update-lnpzg" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.225607 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0fd6b61-532c-4002-bc57-c692aa8255f2-operator-scripts\") pod \"nova-cell0-4698-account-create-update-lnpzg\" (UID: \"d0fd6b61-532c-4002-bc57-c692aa8255f2\") " pod="openstack/nova-cell0-4698-account-create-update-lnpzg" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.226425 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0fd6b61-532c-4002-bc57-c692aa8255f2-operator-scripts\") pod \"nova-cell0-4698-account-create-update-lnpzg\" (UID: \"d0fd6b61-532c-4002-bc57-c692aa8255f2\") " pod="openstack/nova-cell0-4698-account-create-update-lnpzg" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.241669 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-cqkx4" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.248838 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67jxp\" (UniqueName: \"kubernetes.io/projected/d0fd6b61-532c-4002-bc57-c692aa8255f2-kube-api-access-67jxp\") pod \"nova-cell0-4698-account-create-update-lnpzg\" (UID: \"d0fd6b61-532c-4002-bc57-c692aa8255f2\") " pod="openstack/nova-cell0-4698-account-create-update-lnpzg" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.327572 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59t5v\" (UniqueName: \"kubernetes.io/projected/b3e75f5b-d129-4f48-b69c-35f0fd329c2b-kube-api-access-59t5v\") pod \"nova-cell1-45eb-account-create-update-hxbqj\" (UID: \"b3e75f5b-d129-4f48-b69c-35f0fd329c2b\") " pod="openstack/nova-cell1-45eb-account-create-update-hxbqj" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.327662 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3e75f5b-d129-4f48-b69c-35f0fd329c2b-operator-scripts\") pod \"nova-cell1-45eb-account-create-update-hxbqj\" (UID: \"b3e75f5b-d129-4f48-b69c-35f0fd329c2b\") " pod="openstack/nova-cell1-45eb-account-create-update-hxbqj" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.433504 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3e75f5b-d129-4f48-b69c-35f0fd329c2b-operator-scripts\") pod \"nova-cell1-45eb-account-create-update-hxbqj\" (UID: \"b3e75f5b-d129-4f48-b69c-35f0fd329c2b\") " pod="openstack/nova-cell1-45eb-account-create-update-hxbqj" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.434296 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3e75f5b-d129-4f48-b69c-35f0fd329c2b-operator-scripts\") pod \"nova-cell1-45eb-account-create-update-hxbqj\" (UID: \"b3e75f5b-d129-4f48-b69c-35f0fd329c2b\") " pod="openstack/nova-cell1-45eb-account-create-update-hxbqj" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.434819 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59t5v\" (UniqueName: \"kubernetes.io/projected/b3e75f5b-d129-4f48-b69c-35f0fd329c2b-kube-api-access-59t5v\") pod \"nova-cell1-45eb-account-create-update-hxbqj\" (UID: \"b3e75f5b-d129-4f48-b69c-35f0fd329c2b\") " pod="openstack/nova-cell1-45eb-account-create-update-hxbqj" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.435420 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4698-account-create-update-lnpzg" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.452602 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59t5v\" (UniqueName: \"kubernetes.io/projected/b3e75f5b-d129-4f48-b69c-35f0fd329c2b-kube-api-access-59t5v\") pod \"nova-cell1-45eb-account-create-update-hxbqj\" (UID: \"b3e75f5b-d129-4f48-b69c-35f0fd329c2b\") " pod="openstack/nova-cell1-45eb-account-create-update-hxbqj" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.534742 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-45eb-account-create-update-hxbqj" Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.729901 4858 generic.go:334] "Generic (PLEG): container finished" podID="0fef11f2-a89c-48f7-b0e8-4ed3045e028e" containerID="4e34b3499fffc2c2e1f3d30e9679c89a662027085d35cf523cfc5754d00a98d1" exitCode=0 Feb 02 17:32:37 crc kubenswrapper[4858]: I0202 17:32:37.729966 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fef11f2-a89c-48f7-b0e8-4ed3045e028e","Type":"ContainerDied","Data":"4e34b3499fffc2c2e1f3d30e9679c89a662027085d35cf523cfc5754d00a98d1"} Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.116577 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.187120 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-scripts\") pod \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.187491 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-combined-ca-bundle\") pod \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.187539 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-run-httpd\") pod \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.187559 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-sg-core-conf-yaml\") pod \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.187605 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-log-httpd\") pod \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.187633 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz4ms\" (UniqueName: \"kubernetes.io/projected/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-kube-api-access-qz4ms\") pod \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.187665 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-config-data\") pod \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\" (UID: \"0fef11f2-a89c-48f7-b0e8-4ed3045e028e\") " Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.188694 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"0fef11f2-a89c-48f7-b0e8-4ed3045e028e" (UID: "0fef11f2-a89c-48f7-b0e8-4ed3045e028e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.189387 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0fef11f2-a89c-48f7-b0e8-4ed3045e028e" (UID: "0fef11f2-a89c-48f7-b0e8-4ed3045e028e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.207417 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-scripts" (OuterVolumeSpecName: "scripts") pod "0fef11f2-a89c-48f7-b0e8-4ed3045e028e" (UID: "0fef11f2-a89c-48f7-b0e8-4ed3045e028e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.209775 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-kube-api-access-qz4ms" (OuterVolumeSpecName: "kube-api-access-qz4ms") pod "0fef11f2-a89c-48f7-b0e8-4ed3045e028e" (UID: "0fef11f2-a89c-48f7-b0e8-4ed3045e028e"). InnerVolumeSpecName "kube-api-access-qz4ms". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.228110 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-857c87669d-c45h7" podUID="24d5a090-abc7-4832-b6c6-2e36edf7d82e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.252341 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-67p9g"] Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.290251 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.290283 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz4ms\" (UniqueName: \"kubernetes.io/projected/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-kube-api-access-qz4ms\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.290292 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.290300 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.543100 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-cqkx4"] Feb 02 17:32:40 crc kubenswrapper[4858]: W0202 17:32:40.544359 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e804b92_5b91_414c_ab96_2c679b264a85.slice/crio-1b5d9c7ca02a3a937d2d6de479dc3f611a76c7466fbb631762d163bebbc6f7f5 
WatchSource:0}: Error finding container 1b5d9c7ca02a3a937d2d6de479dc3f611a76c7466fbb631762d163bebbc6f7f5: Status 404 returned error can't find the container with id 1b5d9c7ca02a3a937d2d6de479dc3f611a76c7466fbb631762d163bebbc6f7f5 Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.548065 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-cpn8d"] Feb 02 17:32:40 crc kubenswrapper[4858]: W0202 17:32:40.561848 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode58dfef0_aeb5_4f3d_bf54_f4c51cf88901.slice/crio-b2f8b1634bce4aec88aad8e0286a6d4a4c158537e27b532751fe236a9dd8aa41 WatchSource:0}: Error finding container b2f8b1634bce4aec88aad8e0286a6d4a4c158537e27b532751fe236a9dd8aa41: Status 404 returned error can't find the container with id b2f8b1634bce4aec88aad8e0286a6d4a4c158537e27b532751fe236a9dd8aa41 Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.584963 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0fef11f2-a89c-48f7-b0e8-4ed3045e028e" (UID: "0fef11f2-a89c-48f7-b0e8-4ed3045e028e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.603933 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.630069 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0fef11f2-a89c-48f7-b0e8-4ed3045e028e" (UID: "0fef11f2-a89c-48f7-b0e8-4ed3045e028e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.685906 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-config-data" (OuterVolumeSpecName: "config-data") pod "0fef11f2-a89c-48f7-b0e8-4ed3045e028e" (UID: "0fef11f2-a89c-48f7-b0e8-4ed3045e028e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.707052 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.707091 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fef11f2-a89c-48f7-b0e8-4ed3045e028e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.755336 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ce0f-account-create-update-cg8px"] Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.772461 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-45eb-account-create-update-hxbqj"] Feb 02 17:32:40 crc kubenswrapper[4858]: W0202 17:32:40.775386 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6858894_d212_4bb0_a6dc_5e7633b29b58.slice/crio-d2d28623d44b008c6ca17b7bf3636445861e91ff5b2b1064e087a38a2551a0a3 WatchSource:0}: Error finding container d2d28623d44b008c6ca17b7bf3636445861e91ff5b2b1064e087a38a2551a0a3: Status 404 returned error can't find the container with id d2d28623d44b008c6ca17b7bf3636445861e91ff5b2b1064e087a38a2551a0a3 Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.785126 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4698-account-create-update-lnpzg"] Feb 02 17:32:40 crc kubenswrapper[4858]: W0202 17:32:40.788318 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0fd6b61_532c_4002_bc57_c692aa8255f2.slice/crio-b0e9064f21d7dd60dd24c423c8bb770f711abf4e0ad650a2aef2b0eb54d1bf1d WatchSource:0}: Error finding container b0e9064f21d7dd60dd24c423c8bb770f711abf4e0ad650a2aef2b0eb54d1bf1d: Status 404 returned error can't find the container with id b0e9064f21d7dd60dd24c423c8bb770f711abf4e0ad650a2aef2b0eb54d1bf1d Feb 02 17:32:40 crc kubenswrapper[4858]: W0202 17:32:40.788613 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3e75f5b_d129_4f48_b69c_35f0fd329c2b.slice/crio-8e1b1075a9b86d82761be189f49a76ba10074841d83581cce8305d356972374a WatchSource:0}: Error finding container 8e1b1075a9b86d82761be189f49a76ba10074841d83581cce8305d356972374a: Status 404 returned error can't find the container with id 8e1b1075a9b86d82761be189f49a76ba10074841d83581cce8305d356972374a Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.792294 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fef11f2-a89c-48f7-b0e8-4ed3045e028e","Type":"ContainerDied","Data":"3ea822e7c5b19934b295eb6a1203209e64f2757a03d69a65bb08085b1b7cd3e5"} Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.792350 4858 scope.go:117] "RemoveContainer" containerID="1d3b2ea7e207ffd24fbdac5d9c146dc0caf133d566370b056c89da13a1735044" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.792510 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.799408 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d0882d39-e033-4ce8-8b09-76d55e1c281c","Type":"ContainerStarted","Data":"6b3558b1b1de0fb905b04b7b5c849b46a16289924e47f5906f790e2ff77b2f78"} Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.809304 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-67p9g" event={"ID":"51908721-b3a6-4ecb-b0bc-041a43ecba5e","Type":"ContainerStarted","Data":"4aeb299b4aee2efe0c1e1e3176138562df162c93b53a75144f200e250528597e"} Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.809349 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-67p9g" event={"ID":"51908721-b3a6-4ecb-b0bc-041a43ecba5e","Type":"ContainerStarted","Data":"77df6881c8cb19069dbe70ee88d5aa446091814c10f384b574632467f8ba6b4d"} Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.815896 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-cqkx4" event={"ID":"9e804b92-5b91-414c-ab96-2c679b264a85","Type":"ContainerStarted","Data":"1b5d9c7ca02a3a937d2d6de479dc3f611a76c7466fbb631762d163bebbc6f7f5"} Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.826819 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7748685595-fdxjj" event={"ID":"6cdd18b7-595d-4635-9a17-32be92896da1","Type":"ContainerStarted","Data":"0667ae615f2699c0c34166db1e11799a2d54bf1f2e2c22230230413c105f7289"} Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.827869 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7748685595-fdxjj" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.828045 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7748685595-fdxjj" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.833319 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cpn8d" event={"ID":"e58dfef0-aeb5-4f3d-bf54-f4c51cf88901","Type":"ContainerStarted","Data":"b2f8b1634bce4aec88aad8e0286a6d4a4c158537e27b532751fe236a9dd8aa41"} Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.837443 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.763104557 podStartE2EDuration="14.837420605s" podCreationTimestamp="2026-02-02 17:32:26 +0000 UTC" firstStartedPulling="2026-02-02 17:32:27.778431987 +0000 UTC m=+1048.930847252" lastFinishedPulling="2026-02-02 17:32:39.852748035 +0000 UTC m=+1061.005163300" observedRunningTime="2026-02-02 17:32:40.826488402 +0000 UTC m=+1061.978903667" watchObservedRunningTime="2026-02-02 17:32:40.837420605 +0000 UTC m=+1061.989835870" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.845840 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-7748685595-fdxjj" podUID="6cdd18b7-595d-4635-9a17-32be92896da1" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.855987 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-67p9g" podStartSLOduration=4.855950934 podStartE2EDuration="4.855950934s" podCreationTimestamp="2026-02-02 17:32:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:32:40.848995665 +0000 UTC m=+1062.001410930" watchObservedRunningTime="2026-02-02 17:32:40.855950934 +0000 UTC m=+1062.008366199" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.889484 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7748685595-fdxjj" podStartSLOduration=8.889460922 podStartE2EDuration="8.889460922s" podCreationTimestamp="2026-02-02 17:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:32:40.867548426 +0000 UTC m=+1062.019963691" watchObservedRunningTime="2026-02-02 17:32:40.889460922 +0000 UTC m=+1062.041876197" Feb 02 17:32:40 crc kubenswrapper[4858]: I0202 17:32:40.902186 4858 scope.go:117] "RemoveContainer" containerID="707d883e2c04c4ab14c027ab3cb258e9d8a994ceaeffe7f21f4e2125d8fbfff8" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.093696 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.097376 4858 scope.go:117] "RemoveContainer" containerID="4e34b3499fffc2c2e1f3d30e9679c89a662027085d35cf523cfc5754d00a98d1" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.118943 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.132243 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:32:41 crc kubenswrapper[4858]: E0202 17:32:41.132645 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fef11f2-a89c-48f7-b0e8-4ed3045e028e" containerName="ceilometer-notification-agent" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.132661 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fef11f2-a89c-48f7-b0e8-4ed3045e028e" containerName="ceilometer-notification-agent" Feb 02 17:32:41 crc kubenswrapper[4858]: E0202 17:32:41.132688 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fef11f2-a89c-48f7-b0e8-4ed3045e028e" containerName="ceilometer-central-agent" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.132696 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fef11f2-a89c-48f7-b0e8-4ed3045e028e" containerName="ceilometer-central-agent" Feb 02 17:32:41 crc kubenswrapper[4858]: E0202 17:32:41.132716 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fef11f2-a89c-48f7-b0e8-4ed3045e028e" containerName="proxy-httpd" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.132722 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fef11f2-a89c-48f7-b0e8-4ed3045e028e" containerName="proxy-httpd" Feb 02 17:32:41 crc kubenswrapper[4858]: E0202 17:32:41.132733 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fef11f2-a89c-48f7-b0e8-4ed3045e028e" containerName="sg-core" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.132739 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fef11f2-a89c-48f7-b0e8-4ed3045e028e" containerName="sg-core" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.132906 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fef11f2-a89c-48f7-b0e8-4ed3045e028e" containerName="proxy-httpd" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.132922 4858 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0fef11f2-a89c-48f7-b0e8-4ed3045e028e" containerName="ceilometer-notification-agent" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.132934 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fef11f2-a89c-48f7-b0e8-4ed3045e028e" containerName="ceilometer-central-agent" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.132951 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fef11f2-a89c-48f7-b0e8-4ed3045e028e" containerName="sg-core" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.134697 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.139224 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.139631 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.157122 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.162097 4858 scope.go:117] "RemoveContainer" containerID="e26be3701a03f118e90dda4c7d592d6669de54f17ca55b29332db9682ada7d29" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.322687 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " pod="openstack/ceilometer-0" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.322739 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2p9b\" (UniqueName: \"kubernetes.io/projected/ac5f73e4-c510-46a1-a0a5-1f8291a58339-kube-api-access-x2p9b\") pod \"ceilometer-0\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " pod="openstack/ceilometer-0" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.322797 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac5f73e4-c510-46a1-a0a5-1f8291a58339-run-httpd\") pod \"ceilometer-0\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " pod="openstack/ceilometer-0" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.322827 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-config-data\") pod \"ceilometer-0\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " pod="openstack/ceilometer-0" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.322853 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac5f73e4-c510-46a1-a0a5-1f8291a58339-log-httpd\") pod \"ceilometer-0\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " pod="openstack/ceilometer-0" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.322882 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-scripts\") pod \"ceilometer-0\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " pod="openstack/ceilometer-0" Feb 
02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.322924 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " pod="openstack/ceilometer-0" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.424207 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " pod="openstack/ceilometer-0" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.424257 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2p9b\" (UniqueName: \"kubernetes.io/projected/ac5f73e4-c510-46a1-a0a5-1f8291a58339-kube-api-access-x2p9b\") pod \"ceilometer-0\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " pod="openstack/ceilometer-0" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.424285 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac5f73e4-c510-46a1-a0a5-1f8291a58339-run-httpd\") pod \"ceilometer-0\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " pod="openstack/ceilometer-0" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.424464 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-config-data\") pod \"ceilometer-0\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " pod="openstack/ceilometer-0" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.424488 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac5f73e4-c510-46a1-a0a5-1f8291a58339-log-httpd\") pod \"ceilometer-0\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " pod="openstack/ceilometer-0" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.424506 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-scripts\") pod \"ceilometer-0\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " pod="openstack/ceilometer-0" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.424542 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " pod="openstack/ceilometer-0" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.425005 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac5f73e4-c510-46a1-a0a5-1f8291a58339-run-httpd\") pod \"ceilometer-0\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " pod="openstack/ceilometer-0" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.425114 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac5f73e4-c510-46a1-a0a5-1f8291a58339-log-httpd\") pod \"ceilometer-0\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " pod="openstack/ceilometer-0" Feb 02 17:32:41 crc 
kubenswrapper[4858]: I0202 17:32:41.430496 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-scripts\") pod \"ceilometer-0\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " pod="openstack/ceilometer-0" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.430712 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " pod="openstack/ceilometer-0" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.433355 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-config-data\") pod \"ceilometer-0\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " pod="openstack/ceilometer-0" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.438410 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " pod="openstack/ceilometer-0" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.444426 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2p9b\" (UniqueName: \"kubernetes.io/projected/ac5f73e4-c510-46a1-a0a5-1f8291a58339-kube-api-access-x2p9b\") pod \"ceilometer-0\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " pod="openstack/ceilometer-0" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.481724 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.854886 4858 generic.go:334] "Generic (PLEG): container finished" podID="e58dfef0-aeb5-4f3d-bf54-f4c51cf88901" containerID="c01d8736e6ad4d4fa323c695f4a715a9e645fd67373e42610fb1e528996c6c52" exitCode=0 Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.855131 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cpn8d" event={"ID":"e58dfef0-aeb5-4f3d-bf54-f4c51cf88901","Type":"ContainerDied","Data":"c01d8736e6ad4d4fa323c695f4a715a9e645fd67373e42610fb1e528996c6c52"} Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.867435 4858 generic.go:334] "Generic (PLEG): container finished" podID="d0fd6b61-532c-4002-bc57-c692aa8255f2" containerID="c45882112a1870e7e71e994b87b8c98eb8d51d39e7898a8eea14ae128a4432e6" exitCode=0 Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.867524 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4698-account-create-update-lnpzg" event={"ID":"d0fd6b61-532c-4002-bc57-c692aa8255f2","Type":"ContainerDied","Data":"c45882112a1870e7e71e994b87b8c98eb8d51d39e7898a8eea14ae128a4432e6"} Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.867553 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4698-account-create-update-lnpzg" event={"ID":"d0fd6b61-532c-4002-bc57-c692aa8255f2","Type":"ContainerStarted","Data":"b0e9064f21d7dd60dd24c423c8bb770f711abf4e0ad650a2aef2b0eb54d1bf1d"} Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.874273 4858 generic.go:334] "Generic (PLEG): container finished" podID="e6858894-d212-4bb0-a6dc-5e7633b29b58" containerID="2878974141f988a25e49a2f0c4142ca6a5bd797937934fedb077c198f93dbc6a" exitCode=0 Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.874345 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ce0f-account-create-update-cg8px" event={"ID":"e6858894-d212-4bb0-a6dc-5e7633b29b58","Type":"ContainerDied","Data":"2878974141f988a25e49a2f0c4142ca6a5bd797937934fedb077c198f93dbc6a"} Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.874372 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ce0f-account-create-update-cg8px" event={"ID":"e6858894-d212-4bb0-a6dc-5e7633b29b58","Type":"ContainerStarted","Data":"d2d28623d44b008c6ca17b7bf3636445861e91ff5b2b1064e087a38a2551a0a3"} Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.881694 4858 generic.go:334] "Generic (PLEG): container finished" podID="b3e75f5b-d129-4f48-b69c-35f0fd329c2b" containerID="75fa19b9d59b8c498381b434052f49ac0ecf7474e337b035a736a24ac53adfcb" exitCode=0 Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.881792 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-45eb-account-create-update-hxbqj" event={"ID":"b3e75f5b-d129-4f48-b69c-35f0fd329c2b","Type":"ContainerDied","Data":"75fa19b9d59b8c498381b434052f49ac0ecf7474e337b035a736a24ac53adfcb"} Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.881816 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-45eb-account-create-update-hxbqj" event={"ID":"b3e75f5b-d129-4f48-b69c-35f0fd329c2b","Type":"ContainerStarted","Data":"8e1b1075a9b86d82761be189f49a76ba10074841d83581cce8305d356972374a"} Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.887388 4858 generic.go:334] "Generic (PLEG): container finished" podID="51908721-b3a6-4ecb-b0bc-041a43ecba5e" 
containerID="4aeb299b4aee2efe0c1e1e3176138562df162c93b53a75144f200e250528597e" exitCode=0 Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.887475 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-67p9g" event={"ID":"51908721-b3a6-4ecb-b0bc-041a43ecba5e","Type":"ContainerDied","Data":"4aeb299b4aee2efe0c1e1e3176138562df162c93b53a75144f200e250528597e"} Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.889678 4858 generic.go:334] "Generic (PLEG): container finished" podID="9e804b92-5b91-414c-ab96-2c679b264a85" containerID="de54cf581f700a3cca648b830e80883b34f157cc68f5258f788fc169cf3b75b2" exitCode=0 Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.889737 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-cqkx4" event={"ID":"9e804b92-5b91-414c-ab96-2c679b264a85","Type":"ContainerDied","Data":"de54cf581f700a3cca648b830e80883b34f157cc68f5258f788fc169cf3b75b2"} Feb 02 17:32:41 crc kubenswrapper[4858]: I0202 17:32:41.902192 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7748685595-fdxjj" Feb 02 17:32:42 crc kubenswrapper[4858]: I0202 17:32:42.011672 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5765cfccfc-zqg5s" Feb 02 17:32:42 crc kubenswrapper[4858]: I0202 17:32:42.069567 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-78bb7f4c66-lspk6"] Feb 02 17:32:42 crc kubenswrapper[4858]: I0202 17:32:42.069784 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-78bb7f4c66-lspk6" podUID="1f84b369-07ee-4a29-8f3b-be71b0e37772" containerName="neutron-api" containerID="cri-o://88d5a8458461dff54a5540e571394cffbc63159b3ace189e15c09d4ac2be2e59" gracePeriod=30 Feb 02 17:32:42 crc kubenswrapper[4858]: I0202 17:32:42.070117 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-78bb7f4c66-lspk6" podUID="1f84b369-07ee-4a29-8f3b-be71b0e37772" containerName="neutron-httpd" containerID="cri-o://1a983ea7418ec1f2f1a01f2c087b7761d05e25515efe2b33c6827c1edfd8f1e8" gracePeriod=30 Feb 02 17:32:42 crc kubenswrapper[4858]: I0202 17:32:42.101422 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:32:42 crc kubenswrapper[4858]: I0202 17:32:42.412274 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fef11f2-a89c-48f7-b0e8-4ed3045e028e" path="/var/lib/kubelet/pods/0fef11f2-a89c-48f7-b0e8-4ed3045e028e/volumes" Feb 02 17:32:42 crc kubenswrapper[4858]: I0202 17:32:42.635181 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:32:42 crc kubenswrapper[4858]: I0202 17:32:42.900034 4858 generic.go:334] "Generic (PLEG): container finished" podID="1f84b369-07ee-4a29-8f3b-be71b0e37772" containerID="1a983ea7418ec1f2f1a01f2c087b7761d05e25515efe2b33c6827c1edfd8f1e8" exitCode=0 Feb 02 17:32:42 crc kubenswrapper[4858]: I0202 17:32:42.900100 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78bb7f4c66-lspk6" event={"ID":"1f84b369-07ee-4a29-8f3b-be71b0e37772","Type":"ContainerDied","Data":"1a983ea7418ec1f2f1a01f2c087b7761d05e25515efe2b33c6827c1edfd8f1e8"} Feb 02 17:32:42 crc kubenswrapper[4858]: I0202 17:32:42.902067 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ac5f73e4-c510-46a1-a0a5-1f8291a58339","Type":"ContainerStarted","Data":"c7659a70e9fe979cae25906f80227d33ac8724318103af8ec26148cda948910c"} Feb 02 17:32:42 crc kubenswrapper[4858]: I0202 17:32:42.902098 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac5f73e4-c510-46a1-a0a5-1f8291a58339","Type":"ContainerStarted","Data":"c422a5607620d909e8f684c3b55e3acbe5c23dc7aa2b197bc3dda540bbec0017"} Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.495676 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-cpn8d" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.587420 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-67p9g" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.618041 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-45eb-account-create-update-hxbqj" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.627967 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4698-account-create-update-lnpzg" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.646532 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-cqkx4" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.661334 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ce0f-account-create-update-cg8px" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.677581 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl4pd\" (UniqueName: \"kubernetes.io/projected/e58dfef0-aeb5-4f3d-bf54-f4c51cf88901-kube-api-access-hl4pd\") pod \"e58dfef0-aeb5-4f3d-bf54-f4c51cf88901\" (UID: \"e58dfef0-aeb5-4f3d-bf54-f4c51cf88901\") " Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.677645 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvps6\" (UniqueName: \"kubernetes.io/projected/51908721-b3a6-4ecb-b0bc-041a43ecba5e-kube-api-access-cvps6\") pod \"51908721-b3a6-4ecb-b0bc-041a43ecba5e\" (UID: \"51908721-b3a6-4ecb-b0bc-041a43ecba5e\") " Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.677685 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51908721-b3a6-4ecb-b0bc-041a43ecba5e-operator-scripts\") pod \"51908721-b3a6-4ecb-b0bc-041a43ecba5e\" (UID: \"51908721-b3a6-4ecb-b0bc-041a43ecba5e\") " Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.677743 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e58dfef0-aeb5-4f3d-bf54-f4c51cf88901-operator-scripts\") pod \"e58dfef0-aeb5-4f3d-bf54-f4c51cf88901\" (UID: \"e58dfef0-aeb5-4f3d-bf54-f4c51cf88901\") " Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.678919 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51908721-b3a6-4ecb-b0bc-041a43ecba5e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51908721-b3a6-4ecb-b0bc-041a43ecba5e" (UID: "51908721-b3a6-4ecb-b0bc-041a43ecba5e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.679797 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e58dfef0-aeb5-4f3d-bf54-f4c51cf88901-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e58dfef0-aeb5-4f3d-bf54-f4c51cf88901" (UID: "e58dfef0-aeb5-4f3d-bf54-f4c51cf88901"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.684404 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e58dfef0-aeb5-4f3d-bf54-f4c51cf88901-kube-api-access-hl4pd" (OuterVolumeSpecName: "kube-api-access-hl4pd") pod "e58dfef0-aeb5-4f3d-bf54-f4c51cf88901" (UID: "e58dfef0-aeb5-4f3d-bf54-f4c51cf88901"). InnerVolumeSpecName "kube-api-access-hl4pd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.685214 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51908721-b3a6-4ecb-b0bc-041a43ecba5e-kube-api-access-cvps6" (OuterVolumeSpecName: "kube-api-access-cvps6") pod "51908721-b3a6-4ecb-b0bc-041a43ecba5e" (UID: "51908721-b3a6-4ecb-b0bc-041a43ecba5e"). InnerVolumeSpecName "kube-api-access-cvps6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.779368 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e804b92-5b91-414c-ab96-2c679b264a85-operator-scripts\") pod \"9e804b92-5b91-414c-ab96-2c679b264a85\" (UID: \"9e804b92-5b91-414c-ab96-2c679b264a85\") " Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.779677 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-447s8\" (UniqueName: \"kubernetes.io/projected/e6858894-d212-4bb0-a6dc-5e7633b29b58-kube-api-access-447s8\") pod \"e6858894-d212-4bb0-a6dc-5e7633b29b58\" (UID: \"e6858894-d212-4bb0-a6dc-5e7633b29b58\") " Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.779747 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3e75f5b-d129-4f48-b69c-35f0fd329c2b-operator-scripts\") pod \"b3e75f5b-d129-4f48-b69c-35f0fd329c2b\" (UID: \"b3e75f5b-d129-4f48-b69c-35f0fd329c2b\") " Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.779781 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67jxp\" (UniqueName: \"kubernetes.io/projected/d0fd6b61-532c-4002-bc57-c692aa8255f2-kube-api-access-67jxp\") pod \"d0fd6b61-532c-4002-bc57-c692aa8255f2\" (UID: \"d0fd6b61-532c-4002-bc57-c692aa8255f2\") " Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.779884 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0fd6b61-532c-4002-bc57-c692aa8255f2-operator-scripts\") pod \"d0fd6b61-532c-4002-bc57-c692aa8255f2\" (UID: \"d0fd6b61-532c-4002-bc57-c692aa8255f2\") " Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.779912 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6858894-d212-4bb0-a6dc-5e7633b29b58-operator-scripts\") pod \"e6858894-d212-4bb0-a6dc-5e7633b29b58\" (UID: 
\"e6858894-d212-4bb0-a6dc-5e7633b29b58\") " Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.780066 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffl56\" (UniqueName: \"kubernetes.io/projected/9e804b92-5b91-414c-ab96-2c679b264a85-kube-api-access-ffl56\") pod \"9e804b92-5b91-414c-ab96-2c679b264a85\" (UID: \"9e804b92-5b91-414c-ab96-2c679b264a85\") " Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.780111 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59t5v\" (UniqueName: \"kubernetes.io/projected/b3e75f5b-d129-4f48-b69c-35f0fd329c2b-kube-api-access-59t5v\") pod \"b3e75f5b-d129-4f48-b69c-35f0fd329c2b\" (UID: \"b3e75f5b-d129-4f48-b69c-35f0fd329c2b\") " Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.780595 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hl4pd\" (UniqueName: \"kubernetes.io/projected/e58dfef0-aeb5-4f3d-bf54-f4c51cf88901-kube-api-access-hl4pd\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.780624 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvps6\" (UniqueName: \"kubernetes.io/projected/51908721-b3a6-4ecb-b0bc-041a43ecba5e-kube-api-access-cvps6\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.780640 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51908721-b3a6-4ecb-b0bc-041a43ecba5e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.780651 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e58dfef0-aeb5-4f3d-bf54-f4c51cf88901-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.781439 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e804b92-5b91-414c-ab96-2c679b264a85-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e804b92-5b91-414c-ab96-2c679b264a85" (UID: "9e804b92-5b91-414c-ab96-2c679b264a85"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.782022 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6858894-d212-4bb0-a6dc-5e7633b29b58-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6858894-d212-4bb0-a6dc-5e7633b29b58" (UID: "e6858894-d212-4bb0-a6dc-5e7633b29b58"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.782479 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0fd6b61-532c-4002-bc57-c692aa8255f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d0fd6b61-532c-4002-bc57-c692aa8255f2" (UID: "d0fd6b61-532c-4002-bc57-c692aa8255f2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.783756 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0fd6b61-532c-4002-bc57-c692aa8255f2-kube-api-access-67jxp" (OuterVolumeSpecName: "kube-api-access-67jxp") pod "d0fd6b61-532c-4002-bc57-c692aa8255f2" (UID: "d0fd6b61-532c-4002-bc57-c692aa8255f2"). InnerVolumeSpecName "kube-api-access-67jxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.787435 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3e75f5b-d129-4f48-b69c-35f0fd329c2b-kube-api-access-59t5v" (OuterVolumeSpecName: "kube-api-access-59t5v") pod "b3e75f5b-d129-4f48-b69c-35f0fd329c2b" (UID: "b3e75f5b-d129-4f48-b69c-35f0fd329c2b"). InnerVolumeSpecName "kube-api-access-59t5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.787736 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6858894-d212-4bb0-a6dc-5e7633b29b58-kube-api-access-447s8" (OuterVolumeSpecName: "kube-api-access-447s8") pod "e6858894-d212-4bb0-a6dc-5e7633b29b58" (UID: "e6858894-d212-4bb0-a6dc-5e7633b29b58"). InnerVolumeSpecName "kube-api-access-447s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.788144 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3e75f5b-d129-4f48-b69c-35f0fd329c2b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3e75f5b-d129-4f48-b69c-35f0fd329c2b" (UID: "b3e75f5b-d129-4f48-b69c-35f0fd329c2b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.795095 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e804b92-5b91-414c-ab96-2c679b264a85-kube-api-access-ffl56" (OuterVolumeSpecName: "kube-api-access-ffl56") pod "9e804b92-5b91-414c-ab96-2c679b264a85" (UID: "9e804b92-5b91-414c-ab96-2c679b264a85"). InnerVolumeSpecName "kube-api-access-ffl56". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.882545 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffl56\" (UniqueName: \"kubernetes.io/projected/9e804b92-5b91-414c-ab96-2c679b264a85-kube-api-access-ffl56\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.882587 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59t5v\" (UniqueName: \"kubernetes.io/projected/b3e75f5b-d129-4f48-b69c-35f0fd329c2b-kube-api-access-59t5v\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.882601 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e804b92-5b91-414c-ab96-2c679b264a85-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.882616 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-447s8\" (UniqueName: \"kubernetes.io/projected/e6858894-d212-4bb0-a6dc-5e7633b29b58-kube-api-access-447s8\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.882628 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3e75f5b-d129-4f48-b69c-35f0fd329c2b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.882639 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67jxp\" (UniqueName: \"kubernetes.io/projected/d0fd6b61-532c-4002-bc57-c692aa8255f2-kube-api-access-67jxp\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.882651 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0fd6b61-532c-4002-bc57-c692aa8255f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.882661 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6858894-d212-4bb0-a6dc-5e7633b29b58-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.911551 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4698-account-create-update-lnpzg" event={"ID":"d0fd6b61-532c-4002-bc57-c692aa8255f2","Type":"ContainerDied","Data":"b0e9064f21d7dd60dd24c423c8bb770f711abf4e0ad650a2aef2b0eb54d1bf1d"} Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.911589 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0e9064f21d7dd60dd24c423c8bb770f711abf4e0ad650a2aef2b0eb54d1bf1d" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.911634 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4698-account-create-update-lnpzg" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.913805 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac5f73e4-c510-46a1-a0a5-1f8291a58339","Type":"ContainerStarted","Data":"4459218d70c6944f401c87957b937a33a64faee542c41b3c309b078fca371f6d"} Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.915236 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ce0f-account-create-update-cg8px" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.915272 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ce0f-account-create-update-cg8px" event={"ID":"e6858894-d212-4bb0-a6dc-5e7633b29b58","Type":"ContainerDied","Data":"d2d28623d44b008c6ca17b7bf3636445861e91ff5b2b1064e087a38a2551a0a3"} Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.915332 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2d28623d44b008c6ca17b7bf3636445861e91ff5b2b1064e087a38a2551a0a3" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.916733 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-45eb-account-create-update-hxbqj" event={"ID":"b3e75f5b-d129-4f48-b69c-35f0fd329c2b","Type":"ContainerDied","Data":"8e1b1075a9b86d82761be189f49a76ba10074841d83581cce8305d356972374a"} Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.916750 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-45eb-account-create-update-hxbqj" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.916761 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e1b1075a9b86d82761be189f49a76ba10074841d83581cce8305d356972374a" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.919132 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-67p9g" event={"ID":"51908721-b3a6-4ecb-b0bc-041a43ecba5e","Type":"ContainerDied","Data":"77df6881c8cb19069dbe70ee88d5aa446091814c10f384b574632467f8ba6b4d"} Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.919161 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77df6881c8cb19069dbe70ee88d5aa446091814c10f384b574632467f8ba6b4d" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.919207 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-67p9g" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.924091 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-cqkx4" event={"ID":"9e804b92-5b91-414c-ab96-2c679b264a85","Type":"ContainerDied","Data":"1b5d9c7ca02a3a937d2d6de479dc3f611a76c7466fbb631762d163bebbc6f7f5"} Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.924129 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b5d9c7ca02a3a937d2d6de479dc3f611a76c7466fbb631762d163bebbc6f7f5" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.924186 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-cqkx4" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.932931 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-cpn8d" Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.933334 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-cpn8d" event={"ID":"e58dfef0-aeb5-4f3d-bf54-f4c51cf88901","Type":"ContainerDied","Data":"b2f8b1634bce4aec88aad8e0286a6d4a4c158537e27b532751fe236a9dd8aa41"} Feb 02 17:32:43 crc kubenswrapper[4858]: I0202 17:32:43.933379 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2f8b1634bce4aec88aad8e0286a6d4a4c158537e27b532751fe236a9dd8aa41" Feb 02 17:32:44 crc kubenswrapper[4858]: I0202 17:32:44.943688 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac5f73e4-c510-46a1-a0a5-1f8291a58339","Type":"ContainerStarted","Data":"2621fba36fd002bf17ceb1117bb5130ee864bf81e83d1b673e3d9285de94d262"} Feb 02 17:32:46 crc kubenswrapper[4858]: I0202 17:32:46.961737 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac5f73e4-c510-46a1-a0a5-1f8291a58339","Type":"ContainerStarted","Data":"6026404fd6743e900c54c598ff19cf2f9ce23b0afe99d5c3511ee477f4858c32"} Feb 02 17:32:46 crc kubenswrapper[4858]: I0202 17:32:46.962401 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 17:32:46 crc kubenswrapper[4858]: I0202 17:32:46.961992 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac5f73e4-c510-46a1-a0a5-1f8291a58339" containerName="proxy-httpd" containerID="cri-o://6026404fd6743e900c54c598ff19cf2f9ce23b0afe99d5c3511ee477f4858c32" gracePeriod=30 Feb 02 17:32:46 crc kubenswrapper[4858]: I0202 17:32:46.961867 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac5f73e4-c510-46a1-a0a5-1f8291a58339" containerName="ceilometer-central-agent" containerID="cri-o://c7659a70e9fe979cae25906f80227d33ac8724318103af8ec26148cda948910c" gracePeriod=30 Feb 02 17:32:46 crc kubenswrapper[4858]: I0202 17:32:46.961937 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac5f73e4-c510-46a1-a0a5-1f8291a58339" containerName="sg-core" containerID="cri-o://2621fba36fd002bf17ceb1117bb5130ee864bf81e83d1b673e3d9285de94d262" gracePeriod=30 Feb 02 17:32:46 crc kubenswrapper[4858]: I0202 17:32:46.962032 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac5f73e4-c510-46a1-a0a5-1f8291a58339" containerName="ceilometer-notification-agent" containerID="cri-o://4459218d70c6944f401c87957b937a33a64faee542c41b3c309b078fca371f6d" gracePeriod=30 Feb 02 17:32:46 crc kubenswrapper[4858]: I0202 17:32:46.988132 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.514929481 podStartE2EDuration="5.988116135s" podCreationTimestamp="2026-02-02 17:32:41 +0000 UTC" firstStartedPulling="2026-02-02 17:32:42.104333343 +0000 UTC m=+1063.256748608" lastFinishedPulling="2026-02-02 17:32:46.577519997 +0000 UTC m=+1067.729935262" observedRunningTime="2026-02-02 17:32:46.987635241 +0000 UTC m=+1068.140050506" watchObservedRunningTime="2026-02-02 17:32:46.988116135 +0000 UTC m=+1068.140531400" Feb 02 17:32:47 crc kubenswrapper[4858]: E0202 17:32:47.139758 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac5f73e4_c510_46a1_a0a5_1f8291a58339.slice/crio-2621fba36fd002bf17ceb1117bb5130ee864bf81e83d1b673e3d9285de94d262.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac5f73e4_c510_46a1_a0a5_1f8291a58339.slice/crio-conmon-6026404fd6743e900c54c598ff19cf2f9ce23b0afe99d5c3511ee477f4858c32.scope\": RecentStats: unable to find data in memory cache]" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.332683 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-66l2r"] Feb 02 17:32:47 crc kubenswrapper[4858]: E0202 17:32:47.333453 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e58dfef0-aeb5-4f3d-bf54-f4c51cf88901" containerName="mariadb-database-create" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.333469 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e58dfef0-aeb5-4f3d-bf54-f4c51cf88901" containerName="mariadb-database-create" Feb 02 17:32:47 crc kubenswrapper[4858]: E0202 17:32:47.333480 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0fd6b61-532c-4002-bc57-c692aa8255f2" containerName="mariadb-account-create-update" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.333486 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0fd6b61-532c-4002-bc57-c692aa8255f2" containerName="mariadb-account-create-update" Feb 02 17:32:47 crc kubenswrapper[4858]: E0202 17:32:47.333504 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6858894-d212-4bb0-a6dc-5e7633b29b58" containerName="mariadb-account-create-update" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.333514 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6858894-d212-4bb0-a6dc-5e7633b29b58" containerName="mariadb-account-create-update" Feb 02 17:32:47 crc kubenswrapper[4858]: E0202 17:32:47.333532 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3e75f5b-d129-4f48-b69c-35f0fd329c2b" containerName="mariadb-account-create-update" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.333539 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3e75f5b-d129-4f48-b69c-35f0fd329c2b" containerName="mariadb-account-create-update" Feb 02 17:32:47 crc kubenswrapper[4858]: E0202 17:32:47.333551 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51908721-b3a6-4ecb-b0bc-041a43ecba5e" containerName="mariadb-database-create" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.333559 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="51908721-b3a6-4ecb-b0bc-041a43ecba5e" containerName="mariadb-database-create" Feb 02 17:32:47 crc kubenswrapper[4858]: E0202 17:32:47.333579 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e804b92-5b91-414c-ab96-2c679b264a85" containerName="mariadb-database-create" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.333585 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e804b92-5b91-414c-ab96-2c679b264a85" containerName="mariadb-database-create" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.333758 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6858894-d212-4bb0-a6dc-5e7633b29b58" containerName="mariadb-account-create-update" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.333774 4858 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e58dfef0-aeb5-4f3d-bf54-f4c51cf88901" containerName="mariadb-database-create" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.333788 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="51908721-b3a6-4ecb-b0bc-041a43ecba5e" containerName="mariadb-database-create" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.333803 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e804b92-5b91-414c-ab96-2c679b264a85" containerName="mariadb-database-create" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.333816 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3e75f5b-d129-4f48-b69c-35f0fd329c2b" containerName="mariadb-account-create-update" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.333827 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0fd6b61-532c-4002-bc57-c692aa8255f2" containerName="mariadb-account-create-update" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.334564 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-66l2r" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.337151 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.337169 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.337775 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8qp9j" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.366896 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-66l2r"] Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.464345 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-66l2r\" (UID: \"f3a30ec7-1686-4aa1-b365-1d0516dda2eb\") " pod="openstack/nova-cell0-conductor-db-sync-66l2r" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.464396 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgfsj\" (UniqueName: \"kubernetes.io/projected/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-kube-api-access-fgfsj\") pod \"nova-cell0-conductor-db-sync-66l2r\" (UID: \"f3a30ec7-1686-4aa1-b365-1d0516dda2eb\") " pod="openstack/nova-cell0-conductor-db-sync-66l2r" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.464427 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-scripts\") pod \"nova-cell0-conductor-db-sync-66l2r\" (UID: \"f3a30ec7-1686-4aa1-b365-1d0516dda2eb\") " pod="openstack/nova-cell0-conductor-db-sync-66l2r" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.464586 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-config-data\") pod \"nova-cell0-conductor-db-sync-66l2r\" (UID: \"f3a30ec7-1686-4aa1-b365-1d0516dda2eb\") " pod="openstack/nova-cell0-conductor-db-sync-66l2r" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 
17:32:47.566670 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-config-data\") pod \"nova-cell0-conductor-db-sync-66l2r\" (UID: \"f3a30ec7-1686-4aa1-b365-1d0516dda2eb\") " pod="openstack/nova-cell0-conductor-db-sync-66l2r" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.566738 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-66l2r\" (UID: \"f3a30ec7-1686-4aa1-b365-1d0516dda2eb\") " pod="openstack/nova-cell0-conductor-db-sync-66l2r" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.566778 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgfsj\" (UniqueName: \"kubernetes.io/projected/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-kube-api-access-fgfsj\") pod \"nova-cell0-conductor-db-sync-66l2r\" (UID: \"f3a30ec7-1686-4aa1-b365-1d0516dda2eb\") " pod="openstack/nova-cell0-conductor-db-sync-66l2r" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.566816 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-scripts\") pod \"nova-cell0-conductor-db-sync-66l2r\" (UID: \"f3a30ec7-1686-4aa1-b365-1d0516dda2eb\") " pod="openstack/nova-cell0-conductor-db-sync-66l2r" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.573260 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-scripts\") pod \"nova-cell0-conductor-db-sync-66l2r\" (UID: \"f3a30ec7-1686-4aa1-b365-1d0516dda2eb\") " pod="openstack/nova-cell0-conductor-db-sync-66l2r" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.573593 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-66l2r\" (UID: \"f3a30ec7-1686-4aa1-b365-1d0516dda2eb\") " pod="openstack/nova-cell0-conductor-db-sync-66l2r" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.575127 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-config-data\") pod \"nova-cell0-conductor-db-sync-66l2r\" (UID: \"f3a30ec7-1686-4aa1-b365-1d0516dda2eb\") " pod="openstack/nova-cell0-conductor-db-sync-66l2r" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.583879 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgfsj\" (UniqueName: \"kubernetes.io/projected/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-kube-api-access-fgfsj\") pod \"nova-cell0-conductor-db-sync-66l2r\" (UID: \"f3a30ec7-1686-4aa1-b365-1d0516dda2eb\") " pod="openstack/nova-cell0-conductor-db-sync-66l2r" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.653433 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-66l2r" Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.993627 4858 generic.go:334] "Generic (PLEG): container finished" podID="1f84b369-07ee-4a29-8f3b-be71b0e37772" containerID="88d5a8458461dff54a5540e571394cffbc63159b3ace189e15c09d4ac2be2e59" exitCode=0 Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.993703 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78bb7f4c66-lspk6" event={"ID":"1f84b369-07ee-4a29-8f3b-be71b0e37772","Type":"ContainerDied","Data":"88d5a8458461dff54a5540e571394cffbc63159b3ace189e15c09d4ac2be2e59"} Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.997944 4858 generic.go:334] "Generic (PLEG): container finished" podID="ac5f73e4-c510-46a1-a0a5-1f8291a58339" containerID="6026404fd6743e900c54c598ff19cf2f9ce23b0afe99d5c3511ee477f4858c32" exitCode=0 Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.997991 4858 generic.go:334] "Generic (PLEG): container finished" podID="ac5f73e4-c510-46a1-a0a5-1f8291a58339" containerID="2621fba36fd002bf17ceb1117bb5130ee864bf81e83d1b673e3d9285de94d262" exitCode=2 Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.998002 4858 generic.go:334] "Generic (PLEG): container finished" podID="ac5f73e4-c510-46a1-a0a5-1f8291a58339" containerID="4459218d70c6944f401c87957b937a33a64faee542c41b3c309b078fca371f6d" exitCode=0 Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.998014 4858 generic.go:334] "Generic (PLEG): container finished" podID="ac5f73e4-c510-46a1-a0a5-1f8291a58339" containerID="c7659a70e9fe979cae25906f80227d33ac8724318103af8ec26148cda948910c" exitCode=0 Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.998036 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac5f73e4-c510-46a1-a0a5-1f8291a58339","Type":"ContainerDied","Data":"6026404fd6743e900c54c598ff19cf2f9ce23b0afe99d5c3511ee477f4858c32"} Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.998073 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac5f73e4-c510-46a1-a0a5-1f8291a58339","Type":"ContainerDied","Data":"2621fba36fd002bf17ceb1117bb5130ee864bf81e83d1b673e3d9285de94d262"} Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.998088 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac5f73e4-c510-46a1-a0a5-1f8291a58339","Type":"ContainerDied","Data":"4459218d70c6944f401c87957b937a33a64faee542c41b3c309b078fca371f6d"} Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.998098 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac5f73e4-c510-46a1-a0a5-1f8291a58339","Type":"ContainerDied","Data":"c7659a70e9fe979cae25906f80227d33ac8724318103af8ec26148cda948910c"} Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.998108 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac5f73e4-c510-46a1-a0a5-1f8291a58339","Type":"ContainerDied","Data":"c422a5607620d909e8f684c3b55e3acbe5c23dc7aa2b197bc3dda540bbec0017"} Feb 02 17:32:47 crc kubenswrapper[4858]: I0202 17:32:47.998119 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c422a5607620d909e8f684c3b55e3acbe5c23dc7aa2b197bc3dda540bbec0017" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.055810 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.185860 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2p9b\" (UniqueName: \"kubernetes.io/projected/ac5f73e4-c510-46a1-a0a5-1f8291a58339-kube-api-access-x2p9b\") pod \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.186113 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-sg-core-conf-yaml\") pod \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.186159 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-combined-ca-bundle\") pod \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.186209 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-config-data\") pod \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.186230 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac5f73e4-c510-46a1-a0a5-1f8291a58339-run-httpd\") pod \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.186291 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac5f73e4-c510-46a1-a0a5-1f8291a58339-log-httpd\") pod \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.186314 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-scripts\") pod \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\" (UID: \"ac5f73e4-c510-46a1-a0a5-1f8291a58339\") " Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.186928 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-66l2r"] Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.187601 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac5f73e4-c510-46a1-a0a5-1f8291a58339-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ac5f73e4-c510-46a1-a0a5-1f8291a58339" (UID: "ac5f73e4-c510-46a1-a0a5-1f8291a58339"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.187783 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac5f73e4-c510-46a1-a0a5-1f8291a58339-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ac5f73e4-c510-46a1-a0a5-1f8291a58339" (UID: "ac5f73e4-c510-46a1-a0a5-1f8291a58339"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.193412 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac5f73e4-c510-46a1-a0a5-1f8291a58339-kube-api-access-x2p9b" (OuterVolumeSpecName: "kube-api-access-x2p9b") pod "ac5f73e4-c510-46a1-a0a5-1f8291a58339" (UID: "ac5f73e4-c510-46a1-a0a5-1f8291a58339"). InnerVolumeSpecName "kube-api-access-x2p9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.204284 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-scripts" (OuterVolumeSpecName: "scripts") pod "ac5f73e4-c510-46a1-a0a5-1f8291a58339" (UID: "ac5f73e4-c510-46a1-a0a5-1f8291a58339"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.236853 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ac5f73e4-c510-46a1-a0a5-1f8291a58339" (UID: "ac5f73e4-c510-46a1-a0a5-1f8291a58339"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.253309 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-78bb7f4c66-lspk6" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.259430 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac5f73e4-c510-46a1-a0a5-1f8291a58339" (UID: "ac5f73e4-c510-46a1-a0a5-1f8291a58339"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.288509 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2p9b\" (UniqueName: \"kubernetes.io/projected/ac5f73e4-c510-46a1-a0a5-1f8291a58339-kube-api-access-x2p9b\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.288534 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.288542 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.288551 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac5f73e4-c510-46a1-a0a5-1f8291a58339-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.288559 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac5f73e4-c510-46a1-a0a5-1f8291a58339-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.288569 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.292252 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7748685595-fdxjj" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.304272 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-config-data" (OuterVolumeSpecName: "config-data") pod "ac5f73e4-c510-46a1-a0a5-1f8291a58339" (UID: "ac5f73e4-c510-46a1-a0a5-1f8291a58339"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.391322 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9kqk\" (UniqueName: \"kubernetes.io/projected/1f84b369-07ee-4a29-8f3b-be71b0e37772-kube-api-access-t9kqk\") pod \"1f84b369-07ee-4a29-8f3b-be71b0e37772\" (UID: \"1f84b369-07ee-4a29-8f3b-be71b0e37772\") " Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.391400 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-httpd-config\") pod \"1f84b369-07ee-4a29-8f3b-be71b0e37772\" (UID: \"1f84b369-07ee-4a29-8f3b-be71b0e37772\") " Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.391476 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-ovndb-tls-certs\") pod \"1f84b369-07ee-4a29-8f3b-be71b0e37772\" (UID: \"1f84b369-07ee-4a29-8f3b-be71b0e37772\") " Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.391566 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-combined-ca-bundle\") pod \"1f84b369-07ee-4a29-8f3b-be71b0e37772\" (UID: \"1f84b369-07ee-4a29-8f3b-be71b0e37772\") " Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.391621 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-config\") pod \"1f84b369-07ee-4a29-8f3b-be71b0e37772\" (UID: \"1f84b369-07ee-4a29-8f3b-be71b0e37772\") " Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.392260 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac5f73e4-c510-46a1-a0a5-1f8291a58339-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.395087 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f84b369-07ee-4a29-8f3b-be71b0e37772-kube-api-access-t9kqk" (OuterVolumeSpecName: "kube-api-access-t9kqk") pod "1f84b369-07ee-4a29-8f3b-be71b0e37772" (UID: "1f84b369-07ee-4a29-8f3b-be71b0e37772"). InnerVolumeSpecName "kube-api-access-t9kqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.401183 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "1f84b369-07ee-4a29-8f3b-be71b0e37772" (UID: "1f84b369-07ee-4a29-8f3b-be71b0e37772"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.443159 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-config" (OuterVolumeSpecName: "config") pod "1f84b369-07ee-4a29-8f3b-be71b0e37772" (UID: "1f84b369-07ee-4a29-8f3b-be71b0e37772"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.460993 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f84b369-07ee-4a29-8f3b-be71b0e37772" (UID: "1f84b369-07ee-4a29-8f3b-be71b0e37772"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.491416 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "1f84b369-07ee-4a29-8f3b-be71b0e37772" (UID: "1f84b369-07ee-4a29-8f3b-be71b0e37772"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.493340 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.493368 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9kqk\" (UniqueName: \"kubernetes.io/projected/1f84b369-07ee-4a29-8f3b-be71b0e37772-kube-api-access-t9kqk\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.493378 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.493387 4858 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:48 crc kubenswrapper[4858]: I0202 17:32:48.493396 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f84b369-07ee-4a29-8f3b-be71b0e37772-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.012722 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-66l2r" event={"ID":"f3a30ec7-1686-4aa1-b365-1d0516dda2eb","Type":"ContainerStarted","Data":"c7a08770a17aa5418d025306746148fa32150532477f1e9c75605422cf7b6c10"} Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.015330 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-78bb7f4c66-lspk6" event={"ID":"1f84b369-07ee-4a29-8f3b-be71b0e37772","Type":"ContainerDied","Data":"e31459d2368629f324d83381ad29e22c2805699ba187745818edb10864a32912"} Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.015398 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-78bb7f4c66-lspk6" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.015418 4858 scope.go:117] "RemoveContainer" containerID="1a983ea7418ec1f2f1a01f2c087b7761d05e25515efe2b33c6827c1edfd8f1e8" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.015353 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.036945 4858 scope.go:117] "RemoveContainer" containerID="88d5a8458461dff54a5540e571394cffbc63159b3ace189e15c09d4ac2be2e59" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.086916 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.125072 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.142254 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-78bb7f4c66-lspk6"] Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.152393 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-78bb7f4c66-lspk6"] Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.167114 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:32:49 crc kubenswrapper[4858]: E0202 17:32:49.167764 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f84b369-07ee-4a29-8f3b-be71b0e37772" containerName="neutron-httpd" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.167872 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f84b369-07ee-4a29-8f3b-be71b0e37772" containerName="neutron-httpd" Feb 02 17:32:49 crc kubenswrapper[4858]: E0202 17:32:49.167970 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5f73e4-c510-46a1-a0a5-1f8291a58339" containerName="sg-core" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.168057 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5f73e4-c510-46a1-a0a5-1f8291a58339" containerName="sg-core" Feb 02 17:32:49 crc kubenswrapper[4858]: E0202 17:32:49.168131 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5f73e4-c510-46a1-a0a5-1f8291a58339" containerName="ceilometer-notification-agent" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.168193 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5f73e4-c510-46a1-a0a5-1f8291a58339" containerName="ceilometer-notification-agent" Feb 02 17:32:49 crc kubenswrapper[4858]: E0202 17:32:49.168266 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5f73e4-c510-46a1-a0a5-1f8291a58339" containerName="proxy-httpd" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.168337 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5f73e4-c510-46a1-a0a5-1f8291a58339" containerName="proxy-httpd" Feb 02 17:32:49 crc kubenswrapper[4858]: E0202 17:32:49.168429 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f84b369-07ee-4a29-8f3b-be71b0e37772" containerName="neutron-api" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.168492 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f84b369-07ee-4a29-8f3b-be71b0e37772" containerName="neutron-api" Feb 02 17:32:49 crc kubenswrapper[4858]: E0202 17:32:49.168562 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5f73e4-c510-46a1-a0a5-1f8291a58339" containerName="ceilometer-central-agent" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.168625 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5f73e4-c510-46a1-a0a5-1f8291a58339" containerName="ceilometer-central-agent" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.168898 4858 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ac5f73e4-c510-46a1-a0a5-1f8291a58339" containerName="proxy-httpd" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.168999 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f84b369-07ee-4a29-8f3b-be71b0e37772" containerName="neutron-httpd" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.169085 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f84b369-07ee-4a29-8f3b-be71b0e37772" containerName="neutron-api" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.169156 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac5f73e4-c510-46a1-a0a5-1f8291a58339" containerName="ceilometer-central-agent" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.169223 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac5f73e4-c510-46a1-a0a5-1f8291a58339" containerName="ceilometer-notification-agent" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.169311 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac5f73e4-c510-46a1-a0a5-1f8291a58339" containerName="sg-core" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.171419 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.179484 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.184409 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.185772 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.307964 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77749f2c-ba94-4459-8f4d-14138f088356-log-httpd\") pod \"ceilometer-0\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.308141 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.308247 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77749f2c-ba94-4459-8f4d-14138f088356-run-httpd\") pod \"ceilometer-0\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.308448 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng86j\" (UniqueName: \"kubernetes.io/projected/77749f2c-ba94-4459-8f4d-14138f088356-kube-api-access-ng86j\") pod \"ceilometer-0\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.308868 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"77749f2c-ba94-4459-8f4d-14138f088356\") " pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.308922 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-config-data\") pod \"ceilometer-0\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.308999 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-scripts\") pod \"ceilometer-0\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.410851 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-scripts\") pod \"ceilometer-0\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.410934 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77749f2c-ba94-4459-8f4d-14138f088356-log-httpd\") pod \"ceilometer-0\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.410952 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.410968 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77749f2c-ba94-4459-8f4d-14138f088356-run-httpd\") pod \"ceilometer-0\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.411049 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng86j\" (UniqueName: \"kubernetes.io/projected/77749f2c-ba94-4459-8f4d-14138f088356-kube-api-access-ng86j\") pod \"ceilometer-0\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.411110 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.411146 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-config-data\") pod \"ceilometer-0\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.412944 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77749f2c-ba94-4459-8f4d-14138f088356-log-httpd\") pod \"ceilometer-0\" (UID: 
\"77749f2c-ba94-4459-8f4d-14138f088356\") " pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.413393 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77749f2c-ba94-4459-8f4d-14138f088356-run-httpd\") pod \"ceilometer-0\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.420864 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-scripts\") pod \"ceilometer-0\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.423021 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.427443 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-config-data\") pod \"ceilometer-0\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.428499 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.446722 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng86j\" (UniqueName: \"kubernetes.io/projected/77749f2c-ba94-4459-8f4d-14138f088356-kube-api-access-ng86j\") pod \"ceilometer-0\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " pod="openstack/ceilometer-0" Feb 02 17:32:49 crc kubenswrapper[4858]: I0202 17:32:49.504999 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:32:50 crc kubenswrapper[4858]: I0202 17:32:50.008904 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:32:50 crc kubenswrapper[4858]: I0202 17:32:50.025236 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77749f2c-ba94-4459-8f4d-14138f088356","Type":"ContainerStarted","Data":"916fe6898ae11150bc24ae42d8742a693d163b1d37414a8dc58e077e10c799e1"} Feb 02 17:32:50 crc kubenswrapper[4858]: I0202 17:32:50.221551 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-857c87669d-c45h7" podUID="24d5a090-abc7-4832-b6c6-2e36edf7d82e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 02 17:32:50 crc kubenswrapper[4858]: I0202 17:32:50.221687 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:32:50 crc kubenswrapper[4858]: I0202 17:32:50.413680 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f84b369-07ee-4a29-8f3b-be71b0e37772" path="/var/lib/kubelet/pods/1f84b369-07ee-4a29-8f3b-be71b0e37772/volumes" Feb 02 17:32:50 crc kubenswrapper[4858]: I0202 17:32:50.414906 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac5f73e4-c510-46a1-a0a5-1f8291a58339" path="/var/lib/kubelet/pods/ac5f73e4-c510-46a1-a0a5-1f8291a58339/volumes" Feb 02 17:32:51 crc kubenswrapper[4858]: I0202 17:32:51.035532 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77749f2c-ba94-4459-8f4d-14138f088356","Type":"ContainerStarted","Data":"85ca06a3e77b83ef58f5635624514a131d64f8ea8aa0ec5fc2cb46ea0fed4e86"} Feb 02 17:32:52 crc kubenswrapper[4858]: I0202 17:32:52.043944 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77749f2c-ba94-4459-8f4d-14138f088356","Type":"ContainerStarted","Data":"f9b72186f00b33de6ecae862e784c593cd6f029397a0ba272462869aa81fd7c2"} Feb 02 17:32:52 crc kubenswrapper[4858]: I0202 17:32:52.680759 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 17:32:52 crc kubenswrapper[4858]: I0202 17:32:52.681234 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="773eb3f6-6091-4a9b-8eeb-dc0ab8a20758" containerName="glance-httpd" containerID="cri-o://52c97d980459381885891c9291b70b89a8bb9f8043655313073d4515c9fd8dc4" gracePeriod=30 Feb 02 17:32:52 crc kubenswrapper[4858]: I0202 17:32:52.681112 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="773eb3f6-6091-4a9b-8eeb-dc0ab8a20758" containerName="glance-log" containerID="cri-o://c0cf5ed62afd157997262f026987b02cb3dea0a4bdcd5c6b6535d7131d209119" gracePeriod=30 Feb 02 17:32:53 crc kubenswrapper[4858]: I0202 17:32:53.058504 4858 generic.go:334] "Generic (PLEG): container finished" podID="773eb3f6-6091-4a9b-8eeb-dc0ab8a20758" containerID="c0cf5ed62afd157997262f026987b02cb3dea0a4bdcd5c6b6535d7131d209119" exitCode=143 Feb 02 17:32:53 crc kubenswrapper[4858]: I0202 17:32:53.058546 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758","Type":"ContainerDied","Data":"c0cf5ed62afd157997262f026987b02cb3dea0a4bdcd5c6b6535d7131d209119"} Feb 02 17:32:53 crc kubenswrapper[4858]: I0202 17:32:53.469186 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 17:32:53 crc kubenswrapper[4858]: I0202 17:32:53.472378 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="11d650f7-3342-41ec-b78a-0f9cbbac4368" containerName="glance-log" containerID="cri-o://5a302d9bb61d57cdd42f0f0fff939f598b4f41cae1a2c0e95be1becc2ffb5f5f" gracePeriod=30 Feb 02 17:32:53 crc kubenswrapper[4858]: I0202 17:32:53.472782 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="11d650f7-3342-41ec-b78a-0f9cbbac4368" containerName="glance-httpd" containerID="cri-o://736d47da60f9d74e9eca323110304c22ad8f40b65dcf1136e9c4dae5183c7c3a" gracePeriod=30 Feb 02 17:32:54 crc kubenswrapper[4858]: I0202 17:32:54.069891 4858 generic.go:334] "Generic (PLEG): container finished" podID="11d650f7-3342-41ec-b78a-0f9cbbac4368" containerID="5a302d9bb61d57cdd42f0f0fff939f598b4f41cae1a2c0e95be1becc2ffb5f5f" exitCode=143 Feb 02 17:32:54 crc kubenswrapper[4858]: I0202 17:32:54.070202 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"11d650f7-3342-41ec-b78a-0f9cbbac4368","Type":"ContainerDied","Data":"5a302d9bb61d57cdd42f0f0fff939f598b4f41cae1a2c0e95be1becc2ffb5f5f"} Feb 02 17:32:55 crc kubenswrapper[4858]: I0202 17:32:55.099922 4858 generic.go:334] "Generic (PLEG): container finished" podID="24d5a090-abc7-4832-b6c6-2e36edf7d82e" containerID="4ea20cb217d595f212f7c30c0c9b8a9c83b72304dd0b30e106b284e161374882" exitCode=137 Feb 02 17:32:55 crc kubenswrapper[4858]: I0202 17:32:55.099983 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-857c87669d-c45h7" event={"ID":"24d5a090-abc7-4832-b6c6-2e36edf7d82e","Type":"ContainerDied","Data":"4ea20cb217d595f212f7c30c0c9b8a9c83b72304dd0b30e106b284e161374882"} Feb 02 17:32:55 crc kubenswrapper[4858]: I0202 17:32:55.105428 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.138052 4858 generic.go:334] "Generic (PLEG): container finished" podID="773eb3f6-6091-4a9b-8eeb-dc0ab8a20758" containerID="52c97d980459381885891c9291b70b89a8bb9f8043655313073d4515c9fd8dc4" exitCode=0 Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.138391 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758","Type":"ContainerDied","Data":"52c97d980459381885891c9291b70b89a8bb9f8043655313073d4515c9fd8dc4"} Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.420049 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.474783 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.542700 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/24d5a090-abc7-4832-b6c6-2e36edf7d82e-horizon-secret-key\") pod \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.542780 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24d5a090-abc7-4832-b6c6-2e36edf7d82e-scripts\") pod \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.542900 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d5a090-abc7-4832-b6c6-2e36edf7d82e-horizon-tls-certs\") pod \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.542930 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d5a090-abc7-4832-b6c6-2e36edf7d82e-combined-ca-bundle\") pod \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.542953 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24d5a090-abc7-4832-b6c6-2e36edf7d82e-config-data\") pod \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.542987 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d5a090-abc7-4832-b6c6-2e36edf7d82e-logs\") pod \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.543116 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6lwx\" (UniqueName: \"kubernetes.io/projected/24d5a090-abc7-4832-b6c6-2e36edf7d82e-kube-api-access-g6lwx\") pod \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\" (UID: \"24d5a090-abc7-4832-b6c6-2e36edf7d82e\") " Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.543725 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24d5a090-abc7-4832-b6c6-2e36edf7d82e-logs" (OuterVolumeSpecName: "logs") pod "24d5a090-abc7-4832-b6c6-2e36edf7d82e" (UID: "24d5a090-abc7-4832-b6c6-2e36edf7d82e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.544235 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24d5a090-abc7-4832-b6c6-2e36edf7d82e-logs\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.553298 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24d5a090-abc7-4832-b6c6-2e36edf7d82e-kube-api-access-g6lwx" (OuterVolumeSpecName: "kube-api-access-g6lwx") pod "24d5a090-abc7-4832-b6c6-2e36edf7d82e" (UID: "24d5a090-abc7-4832-b6c6-2e36edf7d82e"). InnerVolumeSpecName "kube-api-access-g6lwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.553501 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d5a090-abc7-4832-b6c6-2e36edf7d82e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "24d5a090-abc7-4832-b6c6-2e36edf7d82e" (UID: "24d5a090-abc7-4832-b6c6-2e36edf7d82e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.582529 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24d5a090-abc7-4832-b6c6-2e36edf7d82e-scripts" (OuterVolumeSpecName: "scripts") pod "24d5a090-abc7-4832-b6c6-2e36edf7d82e" (UID: "24d5a090-abc7-4832-b6c6-2e36edf7d82e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.584039 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24d5a090-abc7-4832-b6c6-2e36edf7d82e-config-data" (OuterVolumeSpecName: "config-data") pod "24d5a090-abc7-4832-b6c6-2e36edf7d82e" (UID: "24d5a090-abc7-4832-b6c6-2e36edf7d82e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.588099 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d5a090-abc7-4832-b6c6-2e36edf7d82e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24d5a090-abc7-4832-b6c6-2e36edf7d82e" (UID: "24d5a090-abc7-4832-b6c6-2e36edf7d82e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.620884 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d5a090-abc7-4832-b6c6-2e36edf7d82e-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "24d5a090-abc7-4832-b6c6-2e36edf7d82e" (UID: "24d5a090-abc7-4832-b6c6-2e36edf7d82e"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.645705 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-config-data\") pod \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.645796 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-public-tls-certs\") pod \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.645842 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q95xk\" (UniqueName: \"kubernetes.io/projected/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-kube-api-access-q95xk\") pod \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.645906 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.645950 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-logs\") pod \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.646116 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-combined-ca-bundle\") pod \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.646153 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-scripts\") pod \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.646174 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-httpd-run\") pod \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\" (UID: \"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758\") " Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.646645 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6lwx\" (UniqueName: \"kubernetes.io/projected/24d5a090-abc7-4832-b6c6-2e36edf7d82e-kube-api-access-g6lwx\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.646661 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/24d5a090-abc7-4832-b6c6-2e36edf7d82e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.646674 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/24d5a090-abc7-4832-b6c6-2e36edf7d82e-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.646686 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d5a090-abc7-4832-b6c6-2e36edf7d82e-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.646697 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d5a090-abc7-4832-b6c6-2e36edf7d82e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.646707 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24d5a090-abc7-4832-b6c6-2e36edf7d82e-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.647253 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "773eb3f6-6091-4a9b-8eeb-dc0ab8a20758" (UID: "773eb3f6-6091-4a9b-8eeb-dc0ab8a20758"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.647875 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-logs" (OuterVolumeSpecName: "logs") pod "773eb3f6-6091-4a9b-8eeb-dc0ab8a20758" (UID: "773eb3f6-6091-4a9b-8eeb-dc0ab8a20758"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.658429 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "773eb3f6-6091-4a9b-8eeb-dc0ab8a20758" (UID: "773eb3f6-6091-4a9b-8eeb-dc0ab8a20758"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.660105 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-scripts" (OuterVolumeSpecName: "scripts") pod "773eb3f6-6091-4a9b-8eeb-dc0ab8a20758" (UID: "773eb3f6-6091-4a9b-8eeb-dc0ab8a20758"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.660239 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-kube-api-access-q95xk" (OuterVolumeSpecName: "kube-api-access-q95xk") pod "773eb3f6-6091-4a9b-8eeb-dc0ab8a20758" (UID: "773eb3f6-6091-4a9b-8eeb-dc0ab8a20758"). InnerVolumeSpecName "kube-api-access-q95xk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.698827 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "773eb3f6-6091-4a9b-8eeb-dc0ab8a20758" (UID: "773eb3f6-6091-4a9b-8eeb-dc0ab8a20758"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.702122 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "773eb3f6-6091-4a9b-8eeb-dc0ab8a20758" (UID: "773eb3f6-6091-4a9b-8eeb-dc0ab8a20758"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.717132 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-config-data" (OuterVolumeSpecName: "config-data") pod "773eb3f6-6091-4a9b-8eeb-dc0ab8a20758" (UID: "773eb3f6-6091-4a9b-8eeb-dc0ab8a20758"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.748486 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.748527 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q95xk\" (UniqueName: \"kubernetes.io/projected/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-kube-api-access-q95xk\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.748574 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.748587 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-logs\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.748597 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.748607 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.748616 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.748628 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.783320 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 02 17:32:56 crc kubenswrapper[4858]: I0202 17:32:56.850848 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 
17:32:57.148750 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.149235 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-66l2r" event={"ID":"f3a30ec7-1686-4aa1-b365-1d0516dda2eb","Type":"ContainerStarted","Data":"240802e54634b3f98bd4efe13b3e6f7f586deaf5aaadcb30f2f96a94d7ffd69d"} Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.153243 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"773eb3f6-6091-4a9b-8eeb-dc0ab8a20758","Type":"ContainerDied","Data":"d6a6c2dc6eef44b7f69730d02cb56c1ed0247c57ff77e6ebcfd7fb3f79479b1b"} Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.153405 4858 scope.go:117] "RemoveContainer" containerID="52c97d980459381885891c9291b70b89a8bb9f8043655313073d4515c9fd8dc4" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.153308 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.156383 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77749f2c-ba94-4459-8f4d-14138f088356","Type":"ContainerStarted","Data":"eb40c99759963f61d2ac95460d7f10f173cc46fa102993a4974dfa900525e764"} Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.159678 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-857c87669d-c45h7" event={"ID":"24d5a090-abc7-4832-b6c6-2e36edf7d82e","Type":"ContainerDied","Data":"1e1d5f39bde70462a2beb4ccf10aa9808ecd8ca0b7ba97f47016923341aa81a1"} Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.159868 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-857c87669d-c45h7" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.171878 4858 generic.go:334] "Generic (PLEG): container finished" podID="11d650f7-3342-41ec-b78a-0f9cbbac4368" containerID="736d47da60f9d74e9eca323110304c22ad8f40b65dcf1136e9c4dae5183c7c3a" exitCode=0 Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.171926 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"11d650f7-3342-41ec-b78a-0f9cbbac4368","Type":"ContainerDied","Data":"736d47da60f9d74e9eca323110304c22ad8f40b65dcf1136e9c4dae5183c7c3a"} Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.171965 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"11d650f7-3342-41ec-b78a-0f9cbbac4368","Type":"ContainerDied","Data":"00d14742213a18e37fd7b011e3c76d7b420102ae5c46ea9e6d24581be14647f0"} Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.172039 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.248768 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.250636 4858 scope.go:117] "RemoveContainer" containerID="c0cf5ed62afd157997262f026987b02cb3dea0a4bdcd5c6b6535d7131d209119" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.258941 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv66f\" (UniqueName: \"kubernetes.io/projected/11d650f7-3342-41ec-b78a-0f9cbbac4368-kube-api-access-rv66f\") pod \"11d650f7-3342-41ec-b78a-0f9cbbac4368\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.259028 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"11d650f7-3342-41ec-b78a-0f9cbbac4368\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.259110 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11d650f7-3342-41ec-b78a-0f9cbbac4368-httpd-run\") pod \"11d650f7-3342-41ec-b78a-0f9cbbac4368\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.259151 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11d650f7-3342-41ec-b78a-0f9cbbac4368-logs\") pod \"11d650f7-3342-41ec-b78a-0f9cbbac4368\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.259190 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-combined-ca-bundle\") pod \"11d650f7-3342-41ec-b78a-0f9cbbac4368\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.259213 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-config-data\") pod \"11d650f7-3342-41ec-b78a-0f9cbbac4368\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.259263 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-internal-tls-certs\") pod \"11d650f7-3342-41ec-b78a-0f9cbbac4368\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.259316 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-scripts\") pod \"11d650f7-3342-41ec-b78a-0f9cbbac4368\" (UID: \"11d650f7-3342-41ec-b78a-0f9cbbac4368\") " Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.259831 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11d650f7-3342-41ec-b78a-0f9cbbac4368-logs" (OuterVolumeSpecName: "logs") pod "11d650f7-3342-41ec-b78a-0f9cbbac4368" (UID: "11d650f7-3342-41ec-b78a-0f9cbbac4368"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.259949 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11d650f7-3342-41ec-b78a-0f9cbbac4368-logs\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.260165 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11d650f7-3342-41ec-b78a-0f9cbbac4368-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "11d650f7-3342-41ec-b78a-0f9cbbac4368" (UID: "11d650f7-3342-41ec-b78a-0f9cbbac4368"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.304225 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.304868 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11d650f7-3342-41ec-b78a-0f9cbbac4368-kube-api-access-rv66f" (OuterVolumeSpecName: "kube-api-access-rv66f") pod "11d650f7-3342-41ec-b78a-0f9cbbac4368" (UID: "11d650f7-3342-41ec-b78a-0f9cbbac4368"). InnerVolumeSpecName "kube-api-access-rv66f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.305117 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-scripts" (OuterVolumeSpecName: "scripts") pod "11d650f7-3342-41ec-b78a-0f9cbbac4368" (UID: "11d650f7-3342-41ec-b78a-0f9cbbac4368"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.309140 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "11d650f7-3342-41ec-b78a-0f9cbbac4368" (UID: "11d650f7-3342-41ec-b78a-0f9cbbac4368"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.317169 4858 scope.go:117] "RemoveContainer" containerID="51444d04afc916f2112443d8aa3f3ff3ae56b53e450edd0dc4f8f72fcd2a1a61" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.334698 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-66l2r" podStartSLOduration=2.500595499 podStartE2EDuration="10.334677993s" podCreationTimestamp="2026-02-02 17:32:47 +0000 UTC" firstStartedPulling="2026-02-02 17:32:48.199136295 +0000 UTC m=+1069.351551560" lastFinishedPulling="2026-02-02 17:32:56.033218769 +0000 UTC m=+1077.185634054" observedRunningTime="2026-02-02 17:32:57.229880587 +0000 UTC m=+1078.382295852" watchObservedRunningTime="2026-02-02 17:32:57.334677993 +0000 UTC m=+1078.487093258" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.342192 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11d650f7-3342-41ec-b78a-0f9cbbac4368" (UID: "11d650f7-3342-41ec-b78a-0f9cbbac4368"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.344615 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 17:32:57 crc kubenswrapper[4858]: E0202 17:32:57.345133 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d650f7-3342-41ec-b78a-0f9cbbac4368" containerName="glance-log" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.345154 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d650f7-3342-41ec-b78a-0f9cbbac4368" containerName="glance-log" Feb 02 17:32:57 crc kubenswrapper[4858]: E0202 17:32:57.345169 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773eb3f6-6091-4a9b-8eeb-dc0ab8a20758" containerName="glance-httpd" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.345177 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="773eb3f6-6091-4a9b-8eeb-dc0ab8a20758" containerName="glance-httpd" Feb 02 17:32:57 crc kubenswrapper[4858]: E0202 17:32:57.345193 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d5a090-abc7-4832-b6c6-2e36edf7d82e" containerName="horizon" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.345203 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d5a090-abc7-4832-b6c6-2e36edf7d82e" containerName="horizon" Feb 02 17:32:57 crc kubenswrapper[4858]: E0202 17:32:57.345214 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d5a090-abc7-4832-b6c6-2e36edf7d82e" containerName="horizon-log" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.345221 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d5a090-abc7-4832-b6c6-2e36edf7d82e" containerName="horizon-log" Feb 02 17:32:57 crc kubenswrapper[4858]: E0202 17:32:57.345250 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d650f7-3342-41ec-b78a-0f9cbbac4368" containerName="glance-httpd" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.345257 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d650f7-3342-41ec-b78a-0f9cbbac4368" containerName="glance-httpd" Feb 02 17:32:57 crc kubenswrapper[4858]: E0202 17:32:57.345271 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773eb3f6-6091-4a9b-8eeb-dc0ab8a20758" containerName="glance-log" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.345279 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="773eb3f6-6091-4a9b-8eeb-dc0ab8a20758" containerName="glance-log" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.345494 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d5a090-abc7-4832-b6c6-2e36edf7d82e" containerName="horizon-log" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.345506 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="773eb3f6-6091-4a9b-8eeb-dc0ab8a20758" containerName="glance-log" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.345517 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="11d650f7-3342-41ec-b78a-0f9cbbac4368" containerName="glance-log" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.345540 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="773eb3f6-6091-4a9b-8eeb-dc0ab8a20758" containerName="glance-httpd" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.345554 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d5a090-abc7-4832-b6c6-2e36edf7d82e" containerName="horizon" Feb 02 17:32:57 crc 
kubenswrapper[4858]: I0202 17:32:57.345571 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="11d650f7-3342-41ec-b78a-0f9cbbac4368" containerName="glance-httpd" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.346841 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.350160 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.350356 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.352957 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.360525 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-857c87669d-c45h7"] Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.365270 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.365445 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/11d650f7-3342-41ec-b78a-0f9cbbac4368-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.365521 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.365581 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.366381 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv66f\" (UniqueName: \"kubernetes.io/projected/11d650f7-3342-41ec-b78a-0f9cbbac4368-kube-api-access-rv66f\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.396594 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-857c87669d-c45h7"] Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.412257 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "11d650f7-3342-41ec-b78a-0f9cbbac4368" (UID: "11d650f7-3342-41ec-b78a-0f9cbbac4368"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.444614 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.448271 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-config-data" (OuterVolumeSpecName: "config-data") pod "11d650f7-3342-41ec-b78a-0f9cbbac4368" (UID: "11d650f7-3342-41ec-b78a-0f9cbbac4368"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.467935 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.468067 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.468084 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11d650f7-3342-41ec-b78a-0f9cbbac4368-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.550930 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.551150 4858 scope.go:117] "RemoveContainer" containerID="4ea20cb217d595f212f7c30c0c9b8a9c83b72304dd0b30e106b284e161374882" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.569198 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44559c36-6bc9-41d7-810f-f68bb1ed9d18-scripts\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.569537 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44559c36-6bc9-41d7-810f-f68bb1ed9d18-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.569663 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wz8g\" (UniqueName: \"kubernetes.io/projected/44559c36-6bc9-41d7-810f-f68bb1ed9d18-kube-api-access-7wz8g\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.569769 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44559c36-6bc9-41d7-810f-f68bb1ed9d18-logs\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.569902 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44559c36-6bc9-41d7-810f-f68bb1ed9d18-config-data\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.570036 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44559c36-6bc9-41d7-810f-f68bb1ed9d18-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.570192 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44559c36-6bc9-41d7-810f-f68bb1ed9d18-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.570297 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.583121 4858 scope.go:117] "RemoveContainer" containerID="736d47da60f9d74e9eca323110304c22ad8f40b65dcf1136e9c4dae5183c7c3a" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.593422 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.603286 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.607331 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.609573 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.609753 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.619069 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 17:32:57 crc kubenswrapper[4858]: E0202 17:32:57.623056 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11d650f7_3342_41ec_b78a_0f9cbbac4368.slice\": RecentStats: unable to find data in memory cache]" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.662431 4858 scope.go:117] "RemoveContainer" containerID="5a302d9bb61d57cdd42f0f0fff939f598b4f41cae1a2c0e95be1becc2ffb5f5f" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.674136 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wz8g\" (UniqueName: \"kubernetes.io/projected/44559c36-6bc9-41d7-810f-f68bb1ed9d18-kube-api-access-7wz8g\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.674190 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44559c36-6bc9-41d7-810f-f68bb1ed9d18-logs\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.674243 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/44559c36-6bc9-41d7-810f-f68bb1ed9d18-config-data\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.674297 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44559c36-6bc9-41d7-810f-f68bb1ed9d18-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.674374 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44559c36-6bc9-41d7-810f-f68bb1ed9d18-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.674409 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.674457 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44559c36-6bc9-41d7-810f-f68bb1ed9d18-scripts\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.674528 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44559c36-6bc9-41d7-810f-f68bb1ed9d18-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.674862 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44559c36-6bc9-41d7-810f-f68bb1ed9d18-logs\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.675059 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/44559c36-6bc9-41d7-810f-f68bb1ed9d18-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.675047 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.678928 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44559c36-6bc9-41d7-810f-f68bb1ed9d18-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.680486 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/44559c36-6bc9-41d7-810f-f68bb1ed9d18-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.683503 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44559c36-6bc9-41d7-810f-f68bb1ed9d18-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.683552 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44559c36-6bc9-41d7-810f-f68bb1ed9d18-config-data\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.687377 4858 scope.go:117] "RemoveContainer" containerID="736d47da60f9d74e9eca323110304c22ad8f40b65dcf1136e9c4dae5183c7c3a" Feb 02 17:32:57 crc kubenswrapper[4858]: E0202 17:32:57.688391 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"736d47da60f9d74e9eca323110304c22ad8f40b65dcf1136e9c4dae5183c7c3a\": container with ID starting with 736d47da60f9d74e9eca323110304c22ad8f40b65dcf1136e9c4dae5183c7c3a not found: ID does not exist" containerID="736d47da60f9d74e9eca323110304c22ad8f40b65dcf1136e9c4dae5183c7c3a" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.688423 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"736d47da60f9d74e9eca323110304c22ad8f40b65dcf1136e9c4dae5183c7c3a"} err="failed to get container status \"736d47da60f9d74e9eca323110304c22ad8f40b65dcf1136e9c4dae5183c7c3a\": rpc error: code = NotFound desc = could not find container \"736d47da60f9d74e9eca323110304c22ad8f40b65dcf1136e9c4dae5183c7c3a\": container with ID starting with 736d47da60f9d74e9eca323110304c22ad8f40b65dcf1136e9c4dae5183c7c3a not found: ID does not exist" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.688445 4858 scope.go:117] "RemoveContainer" containerID="5a302d9bb61d57cdd42f0f0fff939f598b4f41cae1a2c0e95be1becc2ffb5f5f" Feb 02 17:32:57 crc kubenswrapper[4858]: E0202 17:32:57.692410 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a302d9bb61d57cdd42f0f0fff939f598b4f41cae1a2c0e95be1becc2ffb5f5f\": container with ID starting with 5a302d9bb61d57cdd42f0f0fff939f598b4f41cae1a2c0e95be1becc2ffb5f5f not found: ID does not exist" containerID="5a302d9bb61d57cdd42f0f0fff939f598b4f41cae1a2c0e95be1becc2ffb5f5f" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.692464 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a302d9bb61d57cdd42f0f0fff939f598b4f41cae1a2c0e95be1becc2ffb5f5f"} err="failed to get container status \"5a302d9bb61d57cdd42f0f0fff939f598b4f41cae1a2c0e95be1becc2ffb5f5f\": rpc error: code = NotFound desc = could not find container 
\"5a302d9bb61d57cdd42f0f0fff939f598b4f41cae1a2c0e95be1becc2ffb5f5f\": container with ID starting with 5a302d9bb61d57cdd42f0f0fff939f598b4f41cae1a2c0e95be1becc2ffb5f5f not found: ID does not exist" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.696601 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wz8g\" (UniqueName: \"kubernetes.io/projected/44559c36-6bc9-41d7-810f-f68bb1ed9d18-kube-api-access-7wz8g\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.722754 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"44559c36-6bc9-41d7-810f-f68bb1ed9d18\") " pod="openstack/glance-default-external-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.777431 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.777485 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/15e52f85-8dc6-46f7-8844-701c3e76839c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.777515 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15e52f85-8dc6-46f7-8844-701c3e76839c-logs\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.777543 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/15e52f85-8dc6-46f7-8844-701c3e76839c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.777580 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e52f85-8dc6-46f7-8844-701c3e76839c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.777624 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e52f85-8dc6-46f7-8844-701c3e76839c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.777671 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq5mt\" (UniqueName: 
\"kubernetes.io/projected/15e52f85-8dc6-46f7-8844-701c3e76839c-kube-api-access-wq5mt\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.777686 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e52f85-8dc6-46f7-8844-701c3e76839c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.879157 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e52f85-8dc6-46f7-8844-701c3e76839c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.879235 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq5mt\" (UniqueName: \"kubernetes.io/projected/15e52f85-8dc6-46f7-8844-701c3e76839c-kube-api-access-wq5mt\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.879265 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e52f85-8dc6-46f7-8844-701c3e76839c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.879352 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.879374 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/15e52f85-8dc6-46f7-8844-701c3e76839c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.879411 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15e52f85-8dc6-46f7-8844-701c3e76839c-logs\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.879448 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/15e52f85-8dc6-46f7-8844-701c3e76839c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.879501 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/15e52f85-8dc6-46f7-8844-701c3e76839c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.880095 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.880285 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15e52f85-8dc6-46f7-8844-701c3e76839c-logs\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.880513 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/15e52f85-8dc6-46f7-8844-701c3e76839c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.886916 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15e52f85-8dc6-46f7-8844-701c3e76839c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.887190 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15e52f85-8dc6-46f7-8844-701c3e76839c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.887632 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/15e52f85-8dc6-46f7-8844-701c3e76839c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.897357 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15e52f85-8dc6-46f7-8844-701c3e76839c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.904133 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq5mt\" (UniqueName: \"kubernetes.io/projected/15e52f85-8dc6-46f7-8844-701c3e76839c-kube-api-access-wq5mt\") pod \"glance-default-internal-api-0\" (UID: \"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.907455 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"15e52f85-8dc6-46f7-8844-701c3e76839c\") " pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.966544 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 17:32:57 crc kubenswrapper[4858]: I0202 17:32:57.976786 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 17:32:58 crc kubenswrapper[4858]: I0202 17:32:58.424799 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11d650f7-3342-41ec-b78a-0f9cbbac4368" path="/var/lib/kubelet/pods/11d650f7-3342-41ec-b78a-0f9cbbac4368/volumes" Feb 02 17:32:58 crc kubenswrapper[4858]: I0202 17:32:58.425626 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24d5a090-abc7-4832-b6c6-2e36edf7d82e" path="/var/lib/kubelet/pods/24d5a090-abc7-4832-b6c6-2e36edf7d82e/volumes" Feb 02 17:32:58 crc kubenswrapper[4858]: I0202 17:32:58.426398 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="773eb3f6-6091-4a9b-8eeb-dc0ab8a20758" path="/var/lib/kubelet/pods/773eb3f6-6091-4a9b-8eeb-dc0ab8a20758/volumes" Feb 02 17:32:58 crc kubenswrapper[4858]: I0202 17:32:58.629171 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 17:32:58 crc kubenswrapper[4858]: W0202 17:32:58.646203 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15e52f85_8dc6_46f7_8844_701c3e76839c.slice/crio-19a5b1ecda794f34388f207a9d3a727471f12578e5bba30d2c46c4385f773218 WatchSource:0}: Error finding container 19a5b1ecda794f34388f207a9d3a727471f12578e5bba30d2c46c4385f773218: Status 404 returned error can't find the container with id 19a5b1ecda794f34388f207a9d3a727471f12578e5bba30d2c46c4385f773218 Feb 02 17:32:58 crc kubenswrapper[4858]: I0202 17:32:58.717513 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 17:32:59 crc kubenswrapper[4858]: I0202 17:32:59.230760 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"15e52f85-8dc6-46f7-8844-701c3e76839c","Type":"ContainerStarted","Data":"19a5b1ecda794f34388f207a9d3a727471f12578e5bba30d2c46c4385f773218"} Feb 02 17:32:59 crc kubenswrapper[4858]: I0202 17:32:59.232008 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"44559c36-6bc9-41d7-810f-f68bb1ed9d18","Type":"ContainerStarted","Data":"0e839141058925151f2959a37a583318746d90f28ff0052222eb3bf0ef7a2a1e"} Feb 02 17:32:59 crc kubenswrapper[4858]: I0202 17:32:59.237869 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77749f2c-ba94-4459-8f4d-14138f088356","Type":"ContainerStarted","Data":"c8027205a8f2f6dfc1726bd436d15e638a4d52b3f19a1c36dc3d60342f8df68e"} Feb 02 17:32:59 crc kubenswrapper[4858]: I0202 17:32:59.238014 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 17:32:59 crc kubenswrapper[4858]: I0202 17:32:59.238016 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="77749f2c-ba94-4459-8f4d-14138f088356" containerName="ceilometer-central-agent" containerID="cri-o://85ca06a3e77b83ef58f5635624514a131d64f8ea8aa0ec5fc2cb46ea0fed4e86" gracePeriod=30 Feb 02 17:32:59 crc 
kubenswrapper[4858]: I0202 17:32:59.238095 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="77749f2c-ba94-4459-8f4d-14138f088356" containerName="proxy-httpd" containerID="cri-o://c8027205a8f2f6dfc1726bd436d15e638a4d52b3f19a1c36dc3d60342f8df68e" gracePeriod=30 Feb 02 17:32:59 crc kubenswrapper[4858]: I0202 17:32:59.238131 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="77749f2c-ba94-4459-8f4d-14138f088356" containerName="sg-core" containerID="cri-o://eb40c99759963f61d2ac95460d7f10f173cc46fa102993a4974dfa900525e764" gracePeriod=30 Feb 02 17:32:59 crc kubenswrapper[4858]: I0202 17:32:59.238165 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="77749f2c-ba94-4459-8f4d-14138f088356" containerName="ceilometer-notification-agent" containerID="cri-o://f9b72186f00b33de6ecae862e784c593cd6f029397a0ba272462869aa81fd7c2" gracePeriod=30 Feb 02 17:32:59 crc kubenswrapper[4858]: I0202 17:32:59.272503 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.773086886 podStartE2EDuration="10.27247563s" podCreationTimestamp="2026-02-02 17:32:49 +0000 UTC" firstStartedPulling="2026-02-02 17:32:50.01484086 +0000 UTC m=+1071.167256125" lastFinishedPulling="2026-02-02 17:32:58.514229604 +0000 UTC m=+1079.666644869" observedRunningTime="2026-02-02 17:32:59.26442542 +0000 UTC m=+1080.416840705" watchObservedRunningTime="2026-02-02 17:32:59.27247563 +0000 UTC m=+1080.424890905" Feb 02 17:33:00 crc kubenswrapper[4858]: I0202 17:33:00.250707 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"15e52f85-8dc6-46f7-8844-701c3e76839c","Type":"ContainerStarted","Data":"c1a83ea588c72e067ab2579effb0f5fce5b3358ed7c7cce60bac21b416e97beb"} Feb 02 17:33:00 crc kubenswrapper[4858]: I0202 17:33:00.252277 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"15e52f85-8dc6-46f7-8844-701c3e76839c","Type":"ContainerStarted","Data":"ab3356b2e627e129572d9c351bab89e1f5db2e7068236111c5a316f3e3beaf57"} Feb 02 17:33:00 crc kubenswrapper[4858]: I0202 17:33:00.254389 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"44559c36-6bc9-41d7-810f-f68bb1ed9d18","Type":"ContainerStarted","Data":"722d19ac30a61ec0e1ac9a7ed8cdc8297934115bb8827b777926772c65eef506"} Feb 02 17:33:00 crc kubenswrapper[4858]: I0202 17:33:00.254425 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"44559c36-6bc9-41d7-810f-f68bb1ed9d18","Type":"ContainerStarted","Data":"64bb9f87055bcbcd8bba8745b0b25425b339aaee00f3975bb60d7acc9a79df38"} Feb 02 17:33:00 crc kubenswrapper[4858]: I0202 17:33:00.258305 4858 generic.go:334] "Generic (PLEG): container finished" podID="77749f2c-ba94-4459-8f4d-14138f088356" containerID="c8027205a8f2f6dfc1726bd436d15e638a4d52b3f19a1c36dc3d60342f8df68e" exitCode=0 Feb 02 17:33:00 crc kubenswrapper[4858]: I0202 17:33:00.258331 4858 generic.go:334] "Generic (PLEG): container finished" podID="77749f2c-ba94-4459-8f4d-14138f088356" containerID="eb40c99759963f61d2ac95460d7f10f173cc46fa102993a4974dfa900525e764" exitCode=2 Feb 02 17:33:00 crc kubenswrapper[4858]: I0202 17:33:00.258338 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="77749f2c-ba94-4459-8f4d-14138f088356" containerID="f9b72186f00b33de6ecae862e784c593cd6f029397a0ba272462869aa81fd7c2" exitCode=0 Feb 02 17:33:00 crc kubenswrapper[4858]: I0202 17:33:00.258358 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77749f2c-ba94-4459-8f4d-14138f088356","Type":"ContainerDied","Data":"c8027205a8f2f6dfc1726bd436d15e638a4d52b3f19a1c36dc3d60342f8df68e"} Feb 02 17:33:00 crc kubenswrapper[4858]: I0202 17:33:00.258379 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77749f2c-ba94-4459-8f4d-14138f088356","Type":"ContainerDied","Data":"eb40c99759963f61d2ac95460d7f10f173cc46fa102993a4974dfa900525e764"} Feb 02 17:33:00 crc kubenswrapper[4858]: I0202 17:33:00.258389 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77749f2c-ba94-4459-8f4d-14138f088356","Type":"ContainerDied","Data":"f9b72186f00b33de6ecae862e784c593cd6f029397a0ba272462869aa81fd7c2"} Feb 02 17:33:00 crc kubenswrapper[4858]: I0202 17:33:00.293107 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.293052045 podStartE2EDuration="3.293052045s" podCreationTimestamp="2026-02-02 17:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:33:00.273567668 +0000 UTC m=+1081.425982933" watchObservedRunningTime="2026-02-02 17:33:00.293052045 +0000 UTC m=+1081.445467310" Feb 02 17:33:00 crc kubenswrapper[4858]: I0202 17:33:00.306533 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.30651195 podStartE2EDuration="3.30651195s" podCreationTimestamp="2026-02-02 17:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:33:00.295225338 +0000 UTC m=+1081.447640623" watchObservedRunningTime="2026-02-02 17:33:00.30651195 +0000 UTC m=+1081.458927215" Feb 02 17:33:03 crc kubenswrapper[4858]: I0202 17:33:03.831715 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:33:03 crc kubenswrapper[4858]: I0202 17:33:03.913214 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77749f2c-ba94-4459-8f4d-14138f088356-log-httpd\") pod \"77749f2c-ba94-4459-8f4d-14138f088356\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " Feb 02 17:33:03 crc kubenswrapper[4858]: I0202 17:33:03.913317 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-scripts\") pod \"77749f2c-ba94-4459-8f4d-14138f088356\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " Feb 02 17:33:03 crc kubenswrapper[4858]: I0202 17:33:03.913354 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-combined-ca-bundle\") pod \"77749f2c-ba94-4459-8f4d-14138f088356\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " Feb 02 17:33:03 crc kubenswrapper[4858]: I0202 17:33:03.913372 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng86j\" (UniqueName: \"kubernetes.io/projected/77749f2c-ba94-4459-8f4d-14138f088356-kube-api-access-ng86j\") pod \"77749f2c-ba94-4459-8f4d-14138f088356\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " Feb 02 17:33:03 crc kubenswrapper[4858]: I0202 17:33:03.913409 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-sg-core-conf-yaml\") pod \"77749f2c-ba94-4459-8f4d-14138f088356\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " Feb 02 17:33:03 crc kubenswrapper[4858]: I0202 17:33:03.913440 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77749f2c-ba94-4459-8f4d-14138f088356-run-httpd\") pod \"77749f2c-ba94-4459-8f4d-14138f088356\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " Feb 02 17:33:03 crc kubenswrapper[4858]: I0202 17:33:03.913492 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-config-data\") pod \"77749f2c-ba94-4459-8f4d-14138f088356\" (UID: \"77749f2c-ba94-4459-8f4d-14138f088356\") " Feb 02 17:33:03 crc kubenswrapper[4858]: I0202 17:33:03.913799 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77749f2c-ba94-4459-8f4d-14138f088356-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "77749f2c-ba94-4459-8f4d-14138f088356" (UID: "77749f2c-ba94-4459-8f4d-14138f088356"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:33:03 crc kubenswrapper[4858]: I0202 17:33:03.914094 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77749f2c-ba94-4459-8f4d-14138f088356-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "77749f2c-ba94-4459-8f4d-14138f088356" (UID: "77749f2c-ba94-4459-8f4d-14138f088356"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:33:03 crc kubenswrapper[4858]: I0202 17:33:03.918471 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-scripts" (OuterVolumeSpecName: "scripts") pod "77749f2c-ba94-4459-8f4d-14138f088356" (UID: "77749f2c-ba94-4459-8f4d-14138f088356"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:03 crc kubenswrapper[4858]: I0202 17:33:03.918844 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77749f2c-ba94-4459-8f4d-14138f088356-kube-api-access-ng86j" (OuterVolumeSpecName: "kube-api-access-ng86j") pod "77749f2c-ba94-4459-8f4d-14138f088356" (UID: "77749f2c-ba94-4459-8f4d-14138f088356"). InnerVolumeSpecName "kube-api-access-ng86j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:33:03 crc kubenswrapper[4858]: I0202 17:33:03.940370 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "77749f2c-ba94-4459-8f4d-14138f088356" (UID: "77749f2c-ba94-4459-8f4d-14138f088356"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:03 crc kubenswrapper[4858]: I0202 17:33:03.982108 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77749f2c-ba94-4459-8f4d-14138f088356" (UID: "77749f2c-ba94-4459-8f4d-14138f088356"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.009127 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-config-data" (OuterVolumeSpecName: "config-data") pod "77749f2c-ba94-4459-8f4d-14138f088356" (UID: "77749f2c-ba94-4459-8f4d-14138f088356"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.015582 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77749f2c-ba94-4459-8f4d-14138f088356-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.015613 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.015622 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/77749f2c-ba94-4459-8f4d-14138f088356-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.015631 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.015642 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.015652 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng86j\" (UniqueName: \"kubernetes.io/projected/77749f2c-ba94-4459-8f4d-14138f088356-kube-api-access-ng86j\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.015661 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/77749f2c-ba94-4459-8f4d-14138f088356-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.295339 4858 generic.go:334] "Generic (PLEG): container finished" podID="77749f2c-ba94-4459-8f4d-14138f088356" containerID="85ca06a3e77b83ef58f5635624514a131d64f8ea8aa0ec5fc2cb46ea0fed4e86" exitCode=0 Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.295387 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77749f2c-ba94-4459-8f4d-14138f088356","Type":"ContainerDied","Data":"85ca06a3e77b83ef58f5635624514a131d64f8ea8aa0ec5fc2cb46ea0fed4e86"} Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.295396 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.295420 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"77749f2c-ba94-4459-8f4d-14138f088356","Type":"ContainerDied","Data":"916fe6898ae11150bc24ae42d8742a693d163b1d37414a8dc58e077e10c799e1"} Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.295441 4858 scope.go:117] "RemoveContainer" containerID="c8027205a8f2f6dfc1726bd436d15e638a4d52b3f19a1c36dc3d60342f8df68e" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.327160 4858 scope.go:117] "RemoveContainer" containerID="eb40c99759963f61d2ac95460d7f10f173cc46fa102993a4974dfa900525e764" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.336688 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.344774 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.368521 4858 scope.go:117] "RemoveContainer" containerID="f9b72186f00b33de6ecae862e784c593cd6f029397a0ba272462869aa81fd7c2" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.370769 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:33:04 crc kubenswrapper[4858]: E0202 17:33:04.371233 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77749f2c-ba94-4459-8f4d-14138f088356" containerName="sg-core" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.371251 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="77749f2c-ba94-4459-8f4d-14138f088356" containerName="sg-core" Feb 02 17:33:04 crc kubenswrapper[4858]: E0202 17:33:04.371284 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77749f2c-ba94-4459-8f4d-14138f088356" containerName="ceilometer-notification-agent" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.371291 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="77749f2c-ba94-4459-8f4d-14138f088356" containerName="ceilometer-notification-agent" Feb 02 17:33:04 crc kubenswrapper[4858]: E0202 17:33:04.371307 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77749f2c-ba94-4459-8f4d-14138f088356" containerName="proxy-httpd" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.371313 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="77749f2c-ba94-4459-8f4d-14138f088356" containerName="proxy-httpd" Feb 02 17:33:04 crc kubenswrapper[4858]: E0202 17:33:04.371334 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77749f2c-ba94-4459-8f4d-14138f088356" containerName="ceilometer-central-agent" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.371341 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="77749f2c-ba94-4459-8f4d-14138f088356" containerName="ceilometer-central-agent" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.371537 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="77749f2c-ba94-4459-8f4d-14138f088356" containerName="ceilometer-central-agent" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.371560 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="77749f2c-ba94-4459-8f4d-14138f088356" containerName="proxy-httpd" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.371571 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="77749f2c-ba94-4459-8f4d-14138f088356" containerName="sg-core" Feb 02 
17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.371584 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="77749f2c-ba94-4459-8f4d-14138f088356" containerName="ceilometer-notification-agent" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.375563 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.378524 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.378603 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.386493 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.419786 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77749f2c-ba94-4459-8f4d-14138f088356" path="/var/lib/kubelet/pods/77749f2c-ba94-4459-8f4d-14138f088356/volumes" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.422716 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-run-httpd\") pod \"ceilometer-0\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.422779 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-log-httpd\") pod \"ceilometer-0\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.422806 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.422998 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-config-data\") pod \"ceilometer-0\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.423039 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzbl2\" (UniqueName: \"kubernetes.io/projected/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-kube-api-access-fzbl2\") pod \"ceilometer-0\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.423166 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-scripts\") pod \"ceilometer-0\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.423286 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.455691 4858 scope.go:117] "RemoveContainer" containerID="85ca06a3e77b83ef58f5635624514a131d64f8ea8aa0ec5fc2cb46ea0fed4e86" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.473921 4858 scope.go:117] "RemoveContainer" containerID="c8027205a8f2f6dfc1726bd436d15e638a4d52b3f19a1c36dc3d60342f8df68e" Feb 02 17:33:04 crc kubenswrapper[4858]: E0202 17:33:04.474394 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8027205a8f2f6dfc1726bd436d15e638a4d52b3f19a1c36dc3d60342f8df68e\": container with ID starting with c8027205a8f2f6dfc1726bd436d15e638a4d52b3f19a1c36dc3d60342f8df68e not found: ID does not exist" containerID="c8027205a8f2f6dfc1726bd436d15e638a4d52b3f19a1c36dc3d60342f8df68e" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.474434 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8027205a8f2f6dfc1726bd436d15e638a4d52b3f19a1c36dc3d60342f8df68e"} err="failed to get container status \"c8027205a8f2f6dfc1726bd436d15e638a4d52b3f19a1c36dc3d60342f8df68e\": rpc error: code = NotFound desc = could not find container \"c8027205a8f2f6dfc1726bd436d15e638a4d52b3f19a1c36dc3d60342f8df68e\": container with ID starting with c8027205a8f2f6dfc1726bd436d15e638a4d52b3f19a1c36dc3d60342f8df68e not found: ID does not exist" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.474462 4858 scope.go:117] "RemoveContainer" containerID="eb40c99759963f61d2ac95460d7f10f173cc46fa102993a4974dfa900525e764" Feb 02 17:33:04 crc kubenswrapper[4858]: E0202 17:33:04.474786 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb40c99759963f61d2ac95460d7f10f173cc46fa102993a4974dfa900525e764\": container with ID starting with eb40c99759963f61d2ac95460d7f10f173cc46fa102993a4974dfa900525e764 not found: ID does not exist" containerID="eb40c99759963f61d2ac95460d7f10f173cc46fa102993a4974dfa900525e764" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.474814 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb40c99759963f61d2ac95460d7f10f173cc46fa102993a4974dfa900525e764"} err="failed to get container status \"eb40c99759963f61d2ac95460d7f10f173cc46fa102993a4974dfa900525e764\": rpc error: code = NotFound desc = could not find container \"eb40c99759963f61d2ac95460d7f10f173cc46fa102993a4974dfa900525e764\": container with ID starting with eb40c99759963f61d2ac95460d7f10f173cc46fa102993a4974dfa900525e764 not found: ID does not exist" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.474833 4858 scope.go:117] "RemoveContainer" containerID="f9b72186f00b33de6ecae862e784c593cd6f029397a0ba272462869aa81fd7c2" Feb 02 17:33:04 crc kubenswrapper[4858]: E0202 17:33:04.475218 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9b72186f00b33de6ecae862e784c593cd6f029397a0ba272462869aa81fd7c2\": container with ID starting with f9b72186f00b33de6ecae862e784c593cd6f029397a0ba272462869aa81fd7c2 not found: ID does not exist" containerID="f9b72186f00b33de6ecae862e784c593cd6f029397a0ba272462869aa81fd7c2" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.475246 4858 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9b72186f00b33de6ecae862e784c593cd6f029397a0ba272462869aa81fd7c2"} err="failed to get container status \"f9b72186f00b33de6ecae862e784c593cd6f029397a0ba272462869aa81fd7c2\": rpc error: code = NotFound desc = could not find container \"f9b72186f00b33de6ecae862e784c593cd6f029397a0ba272462869aa81fd7c2\": container with ID starting with f9b72186f00b33de6ecae862e784c593cd6f029397a0ba272462869aa81fd7c2 not found: ID does not exist" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.475264 4858 scope.go:117] "RemoveContainer" containerID="85ca06a3e77b83ef58f5635624514a131d64f8ea8aa0ec5fc2cb46ea0fed4e86" Feb 02 17:33:04 crc kubenswrapper[4858]: E0202 17:33:04.475869 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85ca06a3e77b83ef58f5635624514a131d64f8ea8aa0ec5fc2cb46ea0fed4e86\": container with ID starting with 85ca06a3e77b83ef58f5635624514a131d64f8ea8aa0ec5fc2cb46ea0fed4e86 not found: ID does not exist" containerID="85ca06a3e77b83ef58f5635624514a131d64f8ea8aa0ec5fc2cb46ea0fed4e86" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.475894 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85ca06a3e77b83ef58f5635624514a131d64f8ea8aa0ec5fc2cb46ea0fed4e86"} err="failed to get container status \"85ca06a3e77b83ef58f5635624514a131d64f8ea8aa0ec5fc2cb46ea0fed4e86\": rpc error: code = NotFound desc = could not find container \"85ca06a3e77b83ef58f5635624514a131d64f8ea8aa0ec5fc2cb46ea0fed4e86\": container with ID starting with 85ca06a3e77b83ef58f5635624514a131d64f8ea8aa0ec5fc2cb46ea0fed4e86 not found: ID does not exist" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.524522 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.524671 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-config-data\") pod \"ceilometer-0\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.524696 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzbl2\" (UniqueName: \"kubernetes.io/projected/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-kube-api-access-fzbl2\") pod \"ceilometer-0\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.524779 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-scripts\") pod \"ceilometer-0\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.524844 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " pod="openstack/ceilometer-0" Feb 02 
17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.524902 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-run-httpd\") pod \"ceilometer-0\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.524999 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-log-httpd\") pod \"ceilometer-0\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.525672 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-run-httpd\") pod \"ceilometer-0\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.526562 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-log-httpd\") pod \"ceilometer-0\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.528599 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.528760 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.529621 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-config-data\") pod \"ceilometer-0\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.537679 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-scripts\") pod \"ceilometer-0\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.542076 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzbl2\" (UniqueName: \"kubernetes.io/projected/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-kube-api-access-fzbl2\") pod \"ceilometer-0\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " pod="openstack/ceilometer-0" Feb 02 17:33:04 crc kubenswrapper[4858]: I0202 17:33:04.744458 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:33:05 crc kubenswrapper[4858]: I0202 17:33:05.209503 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:33:05 crc kubenswrapper[4858]: W0202 17:33:05.224034 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd77f2bc8_8598_45bb_8fe7_3c7778ef4a52.slice/crio-571303422ec2c2453eb670207e09ff3335ab36eeb1be8362664190fc8b876a4f WatchSource:0}: Error finding container 571303422ec2c2453eb670207e09ff3335ab36eeb1be8362664190fc8b876a4f: Status 404 returned error can't find the container with id 571303422ec2c2453eb670207e09ff3335ab36eeb1be8362664190fc8b876a4f Feb 02 17:33:05 crc kubenswrapper[4858]: I0202 17:33:05.311427 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52","Type":"ContainerStarted","Data":"571303422ec2c2453eb670207e09ff3335ab36eeb1be8362664190fc8b876a4f"} Feb 02 17:33:06 crc kubenswrapper[4858]: I0202 17:33:06.344924 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52","Type":"ContainerStarted","Data":"0c1aea4b269713b6ce7ad23e0a6658e3ce681042586685fff76278573ee788e4"} Feb 02 17:33:07 crc kubenswrapper[4858]: I0202 17:33:07.353911 4858 generic.go:334] "Generic (PLEG): container finished" podID="f3a30ec7-1686-4aa1-b365-1d0516dda2eb" containerID="240802e54634b3f98bd4efe13b3e6f7f586deaf5aaadcb30f2f96a94d7ffd69d" exitCode=0 Feb 02 17:33:07 crc kubenswrapper[4858]: I0202 17:33:07.354127 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-66l2r" event={"ID":"f3a30ec7-1686-4aa1-b365-1d0516dda2eb","Type":"ContainerDied","Data":"240802e54634b3f98bd4efe13b3e6f7f586deaf5aaadcb30f2f96a94d7ffd69d"} Feb 02 17:33:07 crc kubenswrapper[4858]: I0202 17:33:07.357580 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52","Type":"ContainerStarted","Data":"90acaf6540bfdafac423cfc6cc11db1f8cf400193b6e5477d43c29cf9917c95b"} Feb 02 17:33:07 crc kubenswrapper[4858]: I0202 17:33:07.357628 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52","Type":"ContainerStarted","Data":"97771346e4fc80d670a0bd77ad8740a9697b27d308bd16389c48a87ff366843f"} Feb 02 17:33:07 crc kubenswrapper[4858]: I0202 17:33:07.968750 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 17:33:07 crc kubenswrapper[4858]: I0202 17:33:07.968833 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 17:33:07 crc kubenswrapper[4858]: I0202 17:33:07.979310 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 17:33:07 crc kubenswrapper[4858]: I0202 17:33:07.979351 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 17:33:08 crc kubenswrapper[4858]: I0202 17:33:08.019709 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 17:33:08 crc kubenswrapper[4858]: I0202 17:33:08.028796 4858 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 17:33:08 crc kubenswrapper[4858]: I0202 17:33:08.031478 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 17:33:08 crc kubenswrapper[4858]: I0202 17:33:08.038212 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 17:33:08 crc kubenswrapper[4858]: I0202 17:33:08.366096 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 17:33:08 crc kubenswrapper[4858]: I0202 17:33:08.366138 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 17:33:08 crc kubenswrapper[4858]: I0202 17:33:08.366303 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 17:33:08 crc kubenswrapper[4858]: I0202 17:33:08.366318 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 17:33:08 crc kubenswrapper[4858]: I0202 17:33:08.728766 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-66l2r" Feb 02 17:33:08 crc kubenswrapper[4858]: I0202 17:33:08.814523 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-config-data\") pod \"f3a30ec7-1686-4aa1-b365-1d0516dda2eb\" (UID: \"f3a30ec7-1686-4aa1-b365-1d0516dda2eb\") " Feb 02 17:33:08 crc kubenswrapper[4858]: I0202 17:33:08.814606 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-combined-ca-bundle\") pod \"f3a30ec7-1686-4aa1-b365-1d0516dda2eb\" (UID: \"f3a30ec7-1686-4aa1-b365-1d0516dda2eb\") " Feb 02 17:33:08 crc kubenswrapper[4858]: I0202 17:33:08.814641 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgfsj\" (UniqueName: \"kubernetes.io/projected/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-kube-api-access-fgfsj\") pod \"f3a30ec7-1686-4aa1-b365-1d0516dda2eb\" (UID: \"f3a30ec7-1686-4aa1-b365-1d0516dda2eb\") " Feb 02 17:33:08 crc kubenswrapper[4858]: I0202 17:33:08.814708 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-scripts\") pod \"f3a30ec7-1686-4aa1-b365-1d0516dda2eb\" (UID: \"f3a30ec7-1686-4aa1-b365-1d0516dda2eb\") " Feb 02 17:33:08 crc kubenswrapper[4858]: I0202 17:33:08.823185 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-kube-api-access-fgfsj" (OuterVolumeSpecName: "kube-api-access-fgfsj") pod "f3a30ec7-1686-4aa1-b365-1d0516dda2eb" (UID: "f3a30ec7-1686-4aa1-b365-1d0516dda2eb"). InnerVolumeSpecName "kube-api-access-fgfsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:33:08 crc kubenswrapper[4858]: I0202 17:33:08.823871 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-scripts" (OuterVolumeSpecName: "scripts") pod "f3a30ec7-1686-4aa1-b365-1d0516dda2eb" (UID: "f3a30ec7-1686-4aa1-b365-1d0516dda2eb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:08 crc kubenswrapper[4858]: I0202 17:33:08.845098 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3a30ec7-1686-4aa1-b365-1d0516dda2eb" (UID: "f3a30ec7-1686-4aa1-b365-1d0516dda2eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:08 crc kubenswrapper[4858]: I0202 17:33:08.891088 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-config-data" (OuterVolumeSpecName: "config-data") pod "f3a30ec7-1686-4aa1-b365-1d0516dda2eb" (UID: "f3a30ec7-1686-4aa1-b365-1d0516dda2eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:08 crc kubenswrapper[4858]: I0202 17:33:08.916912 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:08 crc kubenswrapper[4858]: I0202 17:33:08.916958 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:08 crc kubenswrapper[4858]: I0202 17:33:08.916992 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgfsj\" (UniqueName: \"kubernetes.io/projected/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-kube-api-access-fgfsj\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:08 crc kubenswrapper[4858]: I0202 17:33:08.917005 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3a30ec7-1686-4aa1-b365-1d0516dda2eb-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:09 crc kubenswrapper[4858]: I0202 17:33:09.396567 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-66l2r" event={"ID":"f3a30ec7-1686-4aa1-b365-1d0516dda2eb","Type":"ContainerDied","Data":"c7a08770a17aa5418d025306746148fa32150532477f1e9c75605422cf7b6c10"} Feb 02 17:33:09 crc kubenswrapper[4858]: I0202 17:33:09.396633 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7a08770a17aa5418d025306746148fa32150532477f1e9c75605422cf7b6c10" Feb 02 17:33:09 crc kubenswrapper[4858]: I0202 17:33:09.396592 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-66l2r" Feb 02 17:33:09 crc kubenswrapper[4858]: I0202 17:33:09.503026 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 17:33:09 crc kubenswrapper[4858]: E0202 17:33:09.503490 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a30ec7-1686-4aa1-b365-1d0516dda2eb" containerName="nova-cell0-conductor-db-sync" Feb 02 17:33:09 crc kubenswrapper[4858]: I0202 17:33:09.503513 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a30ec7-1686-4aa1-b365-1d0516dda2eb" containerName="nova-cell0-conductor-db-sync" Feb 02 17:33:09 crc kubenswrapper[4858]: I0202 17:33:09.503770 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a30ec7-1686-4aa1-b365-1d0516dda2eb" containerName="nova-cell0-conductor-db-sync" Feb 02 17:33:09 crc kubenswrapper[4858]: I0202 17:33:09.504539 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 17:33:09 crc kubenswrapper[4858]: I0202 17:33:09.507429 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8qp9j" Feb 02 17:33:09 crc kubenswrapper[4858]: I0202 17:33:09.507669 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 17:33:09 crc kubenswrapper[4858]: I0202 17:33:09.542038 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 17:33:09 crc kubenswrapper[4858]: I0202 17:33:09.629727 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf4ff043-2e61-44ec-a4ca-b93c524edf89-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cf4ff043-2e61-44ec-a4ca-b93c524edf89\") " pod="openstack/nova-cell0-conductor-0" Feb 02 17:33:09 crc kubenswrapper[4858]: I0202 17:33:09.629837 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4ff043-2e61-44ec-a4ca-b93c524edf89-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cf4ff043-2e61-44ec-a4ca-b93c524edf89\") " pod="openstack/nova-cell0-conductor-0" Feb 02 17:33:09 crc kubenswrapper[4858]: I0202 17:33:09.630047 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b64j\" (UniqueName: \"kubernetes.io/projected/cf4ff043-2e61-44ec-a4ca-b93c524edf89-kube-api-access-4b64j\") pod \"nova-cell0-conductor-0\" (UID: \"cf4ff043-2e61-44ec-a4ca-b93c524edf89\") " pod="openstack/nova-cell0-conductor-0" Feb 02 17:33:09 crc kubenswrapper[4858]: I0202 17:33:09.731284 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf4ff043-2e61-44ec-a4ca-b93c524edf89-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cf4ff043-2e61-44ec-a4ca-b93c524edf89\") " pod="openstack/nova-cell0-conductor-0" Feb 02 17:33:09 crc kubenswrapper[4858]: I0202 17:33:09.731333 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4ff043-2e61-44ec-a4ca-b93c524edf89-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cf4ff043-2e61-44ec-a4ca-b93c524edf89\") " pod="openstack/nova-cell0-conductor-0" Feb 02 17:33:09 crc kubenswrapper[4858]: I0202 
17:33:09.731424 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b64j\" (UniqueName: \"kubernetes.io/projected/cf4ff043-2e61-44ec-a4ca-b93c524edf89-kube-api-access-4b64j\") pod \"nova-cell0-conductor-0\" (UID: \"cf4ff043-2e61-44ec-a4ca-b93c524edf89\") " pod="openstack/nova-cell0-conductor-0" Feb 02 17:33:09 crc kubenswrapper[4858]: I0202 17:33:09.739660 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4ff043-2e61-44ec-a4ca-b93c524edf89-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cf4ff043-2e61-44ec-a4ca-b93c524edf89\") " pod="openstack/nova-cell0-conductor-0" Feb 02 17:33:09 crc kubenswrapper[4858]: I0202 17:33:09.747688 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf4ff043-2e61-44ec-a4ca-b93c524edf89-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cf4ff043-2e61-44ec-a4ca-b93c524edf89\") " pod="openstack/nova-cell0-conductor-0" Feb 02 17:33:09 crc kubenswrapper[4858]: I0202 17:33:09.748663 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b64j\" (UniqueName: \"kubernetes.io/projected/cf4ff043-2e61-44ec-a4ca-b93c524edf89-kube-api-access-4b64j\") pod \"nova-cell0-conductor-0\" (UID: \"cf4ff043-2e61-44ec-a4ca-b93c524edf89\") " pod="openstack/nova-cell0-conductor-0" Feb 02 17:33:09 crc kubenswrapper[4858]: I0202 17:33:09.829052 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 17:33:10 crc kubenswrapper[4858]: I0202 17:33:10.419593 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 17:33:10 crc kubenswrapper[4858]: I0202 17:33:10.419883 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 17:33:10 crc kubenswrapper[4858]: I0202 17:33:10.419741 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 17:33:10 crc kubenswrapper[4858]: I0202 17:33:10.419915 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 17:33:10 crc kubenswrapper[4858]: I0202 17:33:10.420922 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 17:33:10 crc kubenswrapper[4858]: I0202 17:33:10.420951 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52","Type":"ContainerStarted","Data":"b76902a9e36bdfcbbfa1c43d2289f90a0c7899247a5a2af466c459d8188e9b5e"} Feb 02 17:33:10 crc kubenswrapper[4858]: W0202 17:33:10.459660 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf4ff043_2e61_44ec_a4ca_b93c524edf89.slice/crio-1ccf218a2b7b821aa707ce053a5f8140443dffee3688eb7a87987ce7d04008b0 WatchSource:0}: Error finding container 1ccf218a2b7b821aa707ce053a5f8140443dffee3688eb7a87987ce7d04008b0: Status 404 returned error can't find the container with id 1ccf218a2b7b821aa707ce053a5f8140443dffee3688eb7a87987ce7d04008b0 Feb 02 17:33:10 crc kubenswrapper[4858]: I0202 17:33:10.468351 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 17:33:10 crc kubenswrapper[4858]: I0202 17:33:10.475232 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=2.523773984 podStartE2EDuration="6.475211844s" podCreationTimestamp="2026-02-02 17:33:04 +0000 UTC" firstStartedPulling="2026-02-02 17:33:05.227496666 +0000 UTC m=+1086.379911931" lastFinishedPulling="2026-02-02 17:33:09.178934526 +0000 UTC m=+1090.331349791" observedRunningTime="2026-02-02 17:33:10.45760482 +0000 UTC m=+1091.610020095" watchObservedRunningTime="2026-02-02 17:33:10.475211844 +0000 UTC m=+1091.627627109" Feb 02 17:33:10 crc kubenswrapper[4858]: I0202 17:33:10.684270 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 17:33:10 crc kubenswrapper[4858]: I0202 17:33:10.766648 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 17:33:11 crc kubenswrapper[4858]: I0202 17:33:11.126537 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 17:33:11 crc kubenswrapper[4858]: I0202 17:33:11.318856 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 17:33:11 crc kubenswrapper[4858]: I0202 17:33:11.443108 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cf4ff043-2e61-44ec-a4ca-b93c524edf89","Type":"ContainerStarted","Data":"f0a867efe062e79bce40a44c376e24059726d7e75f6f4844017bfaf950c914a1"} Feb 02 17:33:11 crc kubenswrapper[4858]: I0202 17:33:11.443148 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cf4ff043-2e61-44ec-a4ca-b93c524edf89","Type":"ContainerStarted","Data":"1ccf218a2b7b821aa707ce053a5f8140443dffee3688eb7a87987ce7d04008b0"} Feb 02 17:33:11 crc kubenswrapper[4858]: I0202 17:33:11.468746 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.468726865 podStartE2EDuration="2.468726865s" podCreationTimestamp="2026-02-02 17:33:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:33:11.461888319 +0000 UTC m=+1092.614303584" watchObservedRunningTime="2026-02-02 17:33:11.468726865 +0000 UTC m=+1092.621142130" Feb 02 17:33:11 crc kubenswrapper[4858]: I0202 17:33:11.959746 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:33:12 crc kubenswrapper[4858]: I0202 17:33:12.450846 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" containerName="ceilometer-central-agent" containerID="cri-o://0c1aea4b269713b6ce7ad23e0a6658e3ce681042586685fff76278573ee788e4" gracePeriod=30 Feb 02 17:33:12 crc kubenswrapper[4858]: I0202 17:33:12.450997 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" containerName="proxy-httpd" containerID="cri-o://b76902a9e36bdfcbbfa1c43d2289f90a0c7899247a5a2af466c459d8188e9b5e" gracePeriod=30 Feb 02 17:33:12 crc kubenswrapper[4858]: I0202 17:33:12.451038 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" containerName="sg-core" containerID="cri-o://90acaf6540bfdafac423cfc6cc11db1f8cf400193b6e5477d43c29cf9917c95b" gracePeriod=30 Feb 02 
Feb 02 17:33:12 crc kubenswrapper[4858]: I0202 17:33:12.452108 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 02 17:33:13 crc kubenswrapper[4858]: I0202 17:33:13.464455 4858 generic.go:334] "Generic (PLEG): container finished" podID="d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" containerID="b76902a9e36bdfcbbfa1c43d2289f90a0c7899247a5a2af466c459d8188e9b5e" exitCode=0 Feb 02 17:33:13 crc kubenswrapper[4858]: I0202 17:33:13.464491 4858 generic.go:334] "Generic (PLEG): container finished" podID="d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" containerID="90acaf6540bfdafac423cfc6cc11db1f8cf400193b6e5477d43c29cf9917c95b" exitCode=2 Feb 02 17:33:13 crc kubenswrapper[4858]: I0202 17:33:13.464500 4858 generic.go:334] "Generic (PLEG): container finished" podID="d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" containerID="97771346e4fc80d670a0bd77ad8740a9697b27d308bd16389c48a87ff366843f" exitCode=0 Feb 02 17:33:13 crc kubenswrapper[4858]: I0202 17:33:13.465272 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52","Type":"ContainerDied","Data":"b76902a9e36bdfcbbfa1c43d2289f90a0c7899247a5a2af466c459d8188e9b5e"} Feb 02 17:33:13 crc kubenswrapper[4858]: I0202 17:33:13.465576 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52","Type":"ContainerDied","Data":"90acaf6540bfdafac423cfc6cc11db1f8cf400193b6e5477d43c29cf9917c95b"} Feb 02 17:33:13 crc kubenswrapper[4858]: I0202 17:33:13.465588 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52","Type":"ContainerDied","Data":"97771346e4fc80d670a0bd77ad8740a9697b27d308bd16389c48a87ff366843f"} Feb 02 17:33:14 crc kubenswrapper[4858]: I0202 17:33:14.478620 4858 generic.go:334] "Generic (PLEG): container finished" podID="d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" containerID="0c1aea4b269713b6ce7ad23e0a6658e3ce681042586685fff76278573ee788e4" exitCode=0 Feb 02 17:33:14 crc kubenswrapper[4858]: I0202 17:33:14.478673 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52","Type":"ContainerDied","Data":"0c1aea4b269713b6ce7ad23e0a6658e3ce681042586685fff76278573ee788e4"} Feb 02 17:33:14 crc kubenswrapper[4858]: I0202 17:33:14.937765 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
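The four "Killing container with a grace period" entries and the exit codes that follow show the graceful-termination path: CRI-O delivers SIGTERM and the container has gracePeriod=30 seconds to exit before SIGKILL. proxy-httpd and both agents finish with exitCode=0, while sg-core finishes with exitCode=2, which suggests its shutdown path returned an error rather than stopping cleanly. A minimal, illustrative sketch of a generic Go service cooperating with that grace period (not code from any of the containers above):

```go
// Illustrative only: catch SIGTERM, shut down within the grace period,
// exit 0 on a clean stop and non-zero otherwise.
package main

import (
	"context"
	"fmt"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// ctx is cancelled when SIGTERM (the runtime's first signal) arrives.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
	defer stop()

	<-ctx.Done() // ... serve until termination is requested ...

	// Stay comfortably under the 30s grace period; if the process is
	// still alive when it expires, the runtime escalates to SIGKILL.
	shutdownCtx, cancel := context.WithTimeout(context.Background(), 25*time.Second)
	defer cancel()
	if err := teardown(shutdownCtx); err != nil {
		fmt.Fprintln(os.Stderr, "unclean shutdown:", err)
		os.Exit(2)
	}
}

// teardown stands in for flushing buffers, closing listeners, etc.
func teardown(ctx context.Context) error { return nil }
```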
Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.031393 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-scripts\") pod \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.031455 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-combined-ca-bundle\") pod \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.031493 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-log-httpd\") pod \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.031573 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzbl2\" (UniqueName: \"kubernetes.io/projected/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-kube-api-access-fzbl2\") pod \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.031609 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-sg-core-conf-yaml\") pod \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.031674 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-run-httpd\") pod \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.031743 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-config-data\") pod \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\" (UID: \"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52\") " Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.032151 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" (UID: "d77f2bc8-8598-45bb-8fe7-3c7778ef4a52"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.032250 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.032310 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" (UID: "d77f2bc8-8598-45bb-8fe7-3c7778ef4a52"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.036938 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-kube-api-access-fzbl2" (OuterVolumeSpecName: "kube-api-access-fzbl2") pod "d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" (UID: "d77f2bc8-8598-45bb-8fe7-3c7778ef4a52"). InnerVolumeSpecName "kube-api-access-fzbl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.041219 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-scripts" (OuterVolumeSpecName: "scripts") pod "d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" (UID: "d77f2bc8-8598-45bb-8fe7-3c7778ef4a52"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.059119 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" (UID: "d77f2bc8-8598-45bb-8fe7-3c7778ef4a52"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.125110 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" (UID: "d77f2bc8-8598-45bb-8fe7-3c7778ef4a52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.128059 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-config-data" (OuterVolumeSpecName: "config-data") pod "d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" (UID: "d77f2bc8-8598-45bb-8fe7-3c7778ef4a52"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.134095 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.134125 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.134139 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.134149 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzbl2\" (UniqueName: \"kubernetes.io/projected/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-kube-api-access-fzbl2\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.134159 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.134167 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.493123 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d77f2bc8-8598-45bb-8fe7-3c7778ef4a52","Type":"ContainerDied","Data":"571303422ec2c2453eb670207e09ff3335ab36eeb1be8362664190fc8b876a4f"} Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.493184 4858 scope.go:117] "RemoveContainer" containerID="b76902a9e36bdfcbbfa1c43d2289f90a0c7899247a5a2af466c459d8188e9b5e" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.493189 4858 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.518604 4858 scope.go:117] "RemoveContainer" containerID="90acaf6540bfdafac423cfc6cc11db1f8cf400193b6e5477d43c29cf9917c95b" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.533008 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.546858 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.557249 4858 scope.go:117] "RemoveContainer" containerID="97771346e4fc80d670a0bd77ad8740a9697b27d308bd16389c48a87ff366843f" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.560130 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:33:15 crc kubenswrapper[4858]: E0202 17:33:15.560489 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" containerName="proxy-httpd" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.560506 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" containerName="proxy-httpd" Feb 02 17:33:15 crc kubenswrapper[4858]: E0202 17:33:15.560521 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" containerName="ceilometer-central-agent" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.560528 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" containerName="ceilometer-central-agent" Feb 02 17:33:15 crc kubenswrapper[4858]: E0202 17:33:15.560540 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" containerName="ceilometer-notification-agent" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.560546 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" containerName="ceilometer-notification-agent" Feb 02 17:33:15 crc kubenswrapper[4858]: E0202 17:33:15.560558 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" containerName="sg-core" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.560564 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" containerName="sg-core" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.560741 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" containerName="ceilometer-central-agent" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.560756 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" containerName="proxy-httpd" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.560778 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" containerName="sg-core" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.560792 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" containerName="ceilometer-notification-agent" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.562301 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
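The SyncLoop DELETE/REMOVE immediately followed by ADD is ceilometer-0 being recreated under the same name but a new UID; the RemoveStaleState entries are the CPU and memory managers dropping per-container state keyed to the old UID d77f2bc8-8598-45bb-8fe7-3c7778ef4a52 before the replacement (ab3475a4-dc06-489e-a4de-ac9a204c5248 in the entries that follow) is admitted. Pod identity is the UID, not the name, which a client-go lookup makes visible; a sketch, assuming client-go is available and with the kubeconfig path as a placeholder:

```go
// Sketch: the pod name survives recreation but the UID does not, which is
// what triggers the RemoveStaleState entries above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("openstack").Get(context.TODO(), "ceilometer-0", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// After the recreation above this prints ab3475a4-..., not d77f2bc8-...
	fmt.Println(pod.UID)
}
```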
Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.564644 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.565017 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.574622 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.598282 4858 scope.go:117] "RemoveContainer" containerID="0c1aea4b269713b6ce7ad23e0a6658e3ce681042586685fff76278573ee788e4" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.642686 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab3475a4-dc06-489e-a4de-ac9a204c5248-run-httpd\") pod \"ceilometer-0\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " pod="openstack/ceilometer-0" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.642737 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-config-data\") pod \"ceilometer-0\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " pod="openstack/ceilometer-0" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.642759 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfhsv\" (UniqueName: \"kubernetes.io/projected/ab3475a4-dc06-489e-a4de-ac9a204c5248-kube-api-access-tfhsv\") pod \"ceilometer-0\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " pod="openstack/ceilometer-0" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.642790 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " pod="openstack/ceilometer-0" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.642819 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-scripts\") pod \"ceilometer-0\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " pod="openstack/ceilometer-0" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.642892 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab3475a4-dc06-489e-a4de-ac9a204c5248-log-httpd\") pod \"ceilometer-0\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " pod="openstack/ceilometer-0" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.642916 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " pod="openstack/ceilometer-0" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.744735 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ab3475a4-dc06-489e-a4de-ac9a204c5248-run-httpd\") pod \"ceilometer-0\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " pod="openstack/ceilometer-0" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.744788 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-config-data\") pod \"ceilometer-0\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " pod="openstack/ceilometer-0" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.744813 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfhsv\" (UniqueName: \"kubernetes.io/projected/ab3475a4-dc06-489e-a4de-ac9a204c5248-kube-api-access-tfhsv\") pod \"ceilometer-0\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " pod="openstack/ceilometer-0" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.744843 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " pod="openstack/ceilometer-0" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.744870 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-scripts\") pod \"ceilometer-0\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " pod="openstack/ceilometer-0" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.744908 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab3475a4-dc06-489e-a4de-ac9a204c5248-log-httpd\") pod \"ceilometer-0\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " pod="openstack/ceilometer-0" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.744929 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " pod="openstack/ceilometer-0" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.747436 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab3475a4-dc06-489e-a4de-ac9a204c5248-run-httpd\") pod \"ceilometer-0\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " pod="openstack/ceilometer-0" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.747756 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab3475a4-dc06-489e-a4de-ac9a204c5248-log-httpd\") pod \"ceilometer-0\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " pod="openstack/ceilometer-0" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.751222 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " pod="openstack/ceilometer-0" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.752233 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-scripts\") pod \"ceilometer-0\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " pod="openstack/ceilometer-0" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.752496 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-config-data\") pod \"ceilometer-0\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " pod="openstack/ceilometer-0" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.766090 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " pod="openstack/ceilometer-0" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.778737 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfhsv\" (UniqueName: \"kubernetes.io/projected/ab3475a4-dc06-489e-a4de-ac9a204c5248-kube-api-access-tfhsv\") pod \"ceilometer-0\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " pod="openstack/ceilometer-0" Feb 02 17:33:15 crc kubenswrapper[4858]: I0202 17:33:15.892478 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:33:16 crc kubenswrapper[4858]: I0202 17:33:16.415479 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d77f2bc8-8598-45bb-8fe7-3c7778ef4a52" path="/var/lib/kubelet/pods/d77f2bc8-8598-45bb-8fe7-3c7778ef4a52/volumes" Feb 02 17:33:16 crc kubenswrapper[4858]: I0202 17:33:16.442383 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:33:16 crc kubenswrapper[4858]: W0202 17:33:16.444794 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab3475a4_dc06_489e_a4de_ac9a204c5248.slice/crio-a5c2cf06012630c4d7e7fd6390cb7a4f45205b88091dc7a6e46768d89fd6b9c0 WatchSource:0}: Error finding container a5c2cf06012630c4d7e7fd6390cb7a4f45205b88091dc7a6e46768d89fd6b9c0: Status 404 returned error can't find the container with id a5c2cf06012630c4d7e7fd6390cb7a4f45205b88091dc7a6e46768d89fd6b9c0 Feb 02 17:33:16 crc kubenswrapper[4858]: I0202 17:33:16.503648 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab3475a4-dc06-489e-a4de-ac9a204c5248","Type":"ContainerStarted","Data":"a5c2cf06012630c4d7e7fd6390cb7a4f45205b88091dc7a6e46768d89fd6b9c0"} Feb 02 17:33:17 crc kubenswrapper[4858]: I0202 17:33:17.515230 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab3475a4-dc06-489e-a4de-ac9a204c5248","Type":"ContainerStarted","Data":"d2b6935a1d58e12c2f15504d10d99db8f0c2e950bd30bf2914e1bc1b523a4d0a"} Feb 02 17:33:18 crc kubenswrapper[4858]: I0202 17:33:18.525323 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab3475a4-dc06-489e-a4de-ac9a204c5248","Type":"ContainerStarted","Data":"e6f2d7175b77045cd25d7910599c3ee2e43313b2f29b1d73338c066049051c4e"} Feb 02 17:33:18 crc kubenswrapper[4858]: I0202 17:33:18.525669 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab3475a4-dc06-489e-a4de-ac9a204c5248","Type":"ContainerStarted","Data":"c842b3dd2c49df6a91cd4742c819516c4ffc305aa4314dead79290123b2b39ee"} Feb 02 17:33:19 crc 
Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.462246 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-8ckkf"] Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.463781 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8ckkf" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.467423 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.467439 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.474238 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8ckkf"] Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.543339 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6jqp\" (UniqueName: \"kubernetes.io/projected/449436cd-88ef-480a-9905-8b120f723f8f-kube-api-access-g6jqp\") pod \"nova-cell0-cell-mapping-8ckkf\" (UID: \"449436cd-88ef-480a-9905-8b120f723f8f\") " pod="openstack/nova-cell0-cell-mapping-8ckkf" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.543855 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/449436cd-88ef-480a-9905-8b120f723f8f-config-data\") pod \"nova-cell0-cell-mapping-8ckkf\" (UID: \"449436cd-88ef-480a-9905-8b120f723f8f\") " pod="openstack/nova-cell0-cell-mapping-8ckkf" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.544018 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/449436cd-88ef-480a-9905-8b120f723f8f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8ckkf\" (UID: \"449436cd-88ef-480a-9905-8b120f723f8f\") " pod="openstack/nova-cell0-cell-mapping-8ckkf" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.544064 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/449436cd-88ef-480a-9905-8b120f723f8f-scripts\") pod \"nova-cell0-cell-mapping-8ckkf\" (UID: \"449436cd-88ef-480a-9905-8b120f723f8f\") " pod="openstack/nova-cell0-cell-mapping-8ckkf" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.655860 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/449436cd-88ef-480a-9905-8b120f723f8f-scripts\") pod \"nova-cell0-cell-mapping-8ckkf\" (UID: \"449436cd-88ef-480a-9905-8b120f723f8f\") " pod="openstack/nova-cell0-cell-mapping-8ckkf" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.655967 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6jqp\" (UniqueName: \"kubernetes.io/projected/449436cd-88ef-480a-9905-8b120f723f8f-kube-api-access-g6jqp\") pod \"nova-cell0-cell-mapping-8ckkf\" (UID: \"449436cd-88ef-480a-9905-8b120f723f8f\") " pod="openstack/nova-cell0-cell-mapping-8ckkf" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.656055 4858 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/449436cd-88ef-480a-9905-8b120f723f8f-config-data\") pod \"nova-cell0-cell-mapping-8ckkf\" (UID: \"449436cd-88ef-480a-9905-8b120f723f8f\") " pod="openstack/nova-cell0-cell-mapping-8ckkf" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.656107 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/449436cd-88ef-480a-9905-8b120f723f8f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8ckkf\" (UID: \"449436cd-88ef-480a-9905-8b120f723f8f\") " pod="openstack/nova-cell0-cell-mapping-8ckkf" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.679944 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/449436cd-88ef-480a-9905-8b120f723f8f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8ckkf\" (UID: \"449436cd-88ef-480a-9905-8b120f723f8f\") " pod="openstack/nova-cell0-cell-mapping-8ckkf" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.703621 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/449436cd-88ef-480a-9905-8b120f723f8f-scripts\") pod \"nova-cell0-cell-mapping-8ckkf\" (UID: \"449436cd-88ef-480a-9905-8b120f723f8f\") " pod="openstack/nova-cell0-cell-mapping-8ckkf" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.715661 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/449436cd-88ef-480a-9905-8b120f723f8f-config-data\") pod \"nova-cell0-cell-mapping-8ckkf\" (UID: \"449436cd-88ef-480a-9905-8b120f723f8f\") " pod="openstack/nova-cell0-cell-mapping-8ckkf" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.743580 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6jqp\" (UniqueName: \"kubernetes.io/projected/449436cd-88ef-480a-9905-8b120f723f8f-kube-api-access-g6jqp\") pod \"nova-cell0-cell-mapping-8ckkf\" (UID: \"449436cd-88ef-480a-9905-8b120f723f8f\") " pod="openstack/nova-cell0-cell-mapping-8ckkf" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.751555 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.752991 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.772377 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.812048 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8ckkf" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.826062 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.827822 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.835846 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.870801 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.871886 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d767725-79eb-441b-9042-54ce10f9aa3b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9d767725-79eb-441b-9042-54ce10f9aa3b\") " pod="openstack/nova-api-0" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.871934 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9cda80-8059-476f-a7f3-710bb907548f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa9cda80-8059-476f-a7f3-710bb907548f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.871991 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d767725-79eb-441b-9042-54ce10f9aa3b-config-data\") pod \"nova-api-0\" (UID: \"9d767725-79eb-441b-9042-54ce10f9aa3b\") " pod="openstack/nova-api-0" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.872030 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkmjc\" (UniqueName: \"kubernetes.io/projected/9d767725-79eb-441b-9042-54ce10f9aa3b-kube-api-access-pkmjc\") pod \"nova-api-0\" (UID: \"9d767725-79eb-441b-9042-54ce10f9aa3b\") " pod="openstack/nova-api-0" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.872076 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d767725-79eb-441b-9042-54ce10f9aa3b-logs\") pod \"nova-api-0\" (UID: \"9d767725-79eb-441b-9042-54ce10f9aa3b\") " pod="openstack/nova-api-0" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.872107 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5kx6\" (UniqueName: \"kubernetes.io/projected/aa9cda80-8059-476f-a7f3-710bb907548f-kube-api-access-h5kx6\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa9cda80-8059-476f-a7f3-710bb907548f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.872128 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9cda80-8059-476f-a7f3-710bb907548f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa9cda80-8059-476f-a7f3-710bb907548f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.921519 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.976204 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d767725-79eb-441b-9042-54ce10f9aa3b-logs\") pod \"nova-api-0\" (UID: \"9d767725-79eb-441b-9042-54ce10f9aa3b\") " 
pod="openstack/nova-api-0" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.976291 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5kx6\" (UniqueName: \"kubernetes.io/projected/aa9cda80-8059-476f-a7f3-710bb907548f-kube-api-access-h5kx6\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa9cda80-8059-476f-a7f3-710bb907548f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.976329 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9cda80-8059-476f-a7f3-710bb907548f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa9cda80-8059-476f-a7f3-710bb907548f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.976386 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d767725-79eb-441b-9042-54ce10f9aa3b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9d767725-79eb-441b-9042-54ce10f9aa3b\") " pod="openstack/nova-api-0" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.976430 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9cda80-8059-476f-a7f3-710bb907548f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa9cda80-8059-476f-a7f3-710bb907548f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.976475 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d767725-79eb-441b-9042-54ce10f9aa3b-config-data\") pod \"nova-api-0\" (UID: \"9d767725-79eb-441b-9042-54ce10f9aa3b\") " pod="openstack/nova-api-0" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.976525 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkmjc\" (UniqueName: \"kubernetes.io/projected/9d767725-79eb-441b-9042-54ce10f9aa3b-kube-api-access-pkmjc\") pod \"nova-api-0\" (UID: \"9d767725-79eb-441b-9042-54ce10f9aa3b\") " pod="openstack/nova-api-0" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.977483 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d767725-79eb-441b-9042-54ce10f9aa3b-logs\") pod \"nova-api-0\" (UID: \"9d767725-79eb-441b-9042-54ce10f9aa3b\") " pod="openstack/nova-api-0" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.985496 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.987171 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.996542 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 17:33:20 crc kubenswrapper[4858]: I0202 17:33:20.996729 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9cda80-8059-476f-a7f3-710bb907548f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa9cda80-8059-476f-a7f3-710bb907548f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.009648 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d767725-79eb-441b-9042-54ce10f9aa3b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9d767725-79eb-441b-9042-54ce10f9aa3b\") " pod="openstack/nova-api-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.037341 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d767725-79eb-441b-9042-54ce10f9aa3b-config-data\") pod \"nova-api-0\" (UID: \"9d767725-79eb-441b-9042-54ce10f9aa3b\") " pod="openstack/nova-api-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.105836 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5kx6\" (UniqueName: \"kubernetes.io/projected/aa9cda80-8059-476f-a7f3-710bb907548f-kube-api-access-h5kx6\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa9cda80-8059-476f-a7f3-710bb907548f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.107042 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9cda80-8059-476f-a7f3-710bb907548f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa9cda80-8059-476f-a7f3-710bb907548f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.108646 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkmjc\" (UniqueName: \"kubernetes.io/projected/9d767725-79eb-441b-9042-54ce10f9aa3b-kube-api-access-pkmjc\") pod \"nova-api-0\" (UID: \"9d767725-79eb-441b-9042-54ce10f9aa3b\") " pod="openstack/nova-api-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.125655 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlr4l\" (UniqueName: \"kubernetes.io/projected/b7433df4-440b-4a0a-ad8e-f0bdece26bc3-kube-api-access-vlr4l\") pod \"nova-scheduler-0\" (UID: \"b7433df4-440b-4a0a-ad8e-f0bdece26bc3\") " pod="openstack/nova-scheduler-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.125831 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7433df4-440b-4a0a-ad8e-f0bdece26bc3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b7433df4-440b-4a0a-ad8e-f0bdece26bc3\") " pod="openstack/nova-scheduler-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.125892 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7433df4-440b-4a0a-ad8e-f0bdece26bc3-config-data\") pod \"nova-scheduler-0\" (UID: \"b7433df4-440b-4a0a-ad8e-f0bdece26bc3\") " 
pod="openstack/nova-scheduler-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.230432 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlr4l\" (UniqueName: \"kubernetes.io/projected/b7433df4-440b-4a0a-ad8e-f0bdece26bc3-kube-api-access-vlr4l\") pod \"nova-scheduler-0\" (UID: \"b7433df4-440b-4a0a-ad8e-f0bdece26bc3\") " pod="openstack/nova-scheduler-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.230889 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7433df4-440b-4a0a-ad8e-f0bdece26bc3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b7433df4-440b-4a0a-ad8e-f0bdece26bc3\") " pod="openstack/nova-scheduler-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.230941 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7433df4-440b-4a0a-ad8e-f0bdece26bc3-config-data\") pod \"nova-scheduler-0\" (UID: \"b7433df4-440b-4a0a-ad8e-f0bdece26bc3\") " pod="openstack/nova-scheduler-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.236549 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.272893 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7433df4-440b-4a0a-ad8e-f0bdece26bc3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b7433df4-440b-4a0a-ad8e-f0bdece26bc3\") " pod="openstack/nova-scheduler-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.274567 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.274936 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7433df4-440b-4a0a-ad8e-f0bdece26bc3-config-data\") pod \"nova-scheduler-0\" (UID: \"b7433df4-440b-4a0a-ad8e-f0bdece26bc3\") " pod="openstack/nova-scheduler-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.316197 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.317695 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.317733 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlr4l\" (UniqueName: \"kubernetes.io/projected/b7433df4-440b-4a0a-ad8e-f0bdece26bc3-kube-api-access-vlr4l\") pod \"nova-scheduler-0\" (UID: \"b7433df4-440b-4a0a-ad8e-f0bdece26bc3\") " pod="openstack/nova-scheduler-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.323519 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.334009 4858 util.go:30] "No sandbox for pod can be found. 
Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.341618 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.350604 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-4548m"] Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.352924 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.363577 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-4548m"] Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.432184 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.447559 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw479\" (UniqueName: \"kubernetes.io/projected/ee9d984c-be22-494f-b6f5-f1760daa47fa-kube-api-access-fw479\") pod \"nova-metadata-0\" (UID: \"ee9d984c-be22-494f-b6f5-f1760daa47fa\") " pod="openstack/nova-metadata-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.447624 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-4548m\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.447666 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-dns-svc\") pod \"dnsmasq-dns-757b4f8459-4548m\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.447732 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee9d984c-be22-494f-b6f5-f1760daa47fa-logs\") pod \"nova-metadata-0\" (UID: \"ee9d984c-be22-494f-b6f5-f1760daa47fa\") " pod="openstack/nova-metadata-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.447758 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffmcr\" (UniqueName: \"kubernetes.io/projected/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-kube-api-access-ffmcr\") pod \"dnsmasq-dns-757b4f8459-4548m\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.447774 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee9d984c-be22-494f-b6f5-f1760daa47fa-config-data\") pod \"nova-metadata-0\" (UID: \"ee9d984c-be22-494f-b6f5-f1760daa47fa\") " pod="openstack/nova-metadata-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.447802 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee9d984c-be22-494f-b6f5-f1760daa47fa-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"ee9d984c-be22-494f-b6f5-f1760daa47fa\") " pod="openstack/nova-metadata-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.447837 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-4548m\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.447853 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-config\") pod \"dnsmasq-dns-757b4f8459-4548m\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.447878 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-4548m\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.557044 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-config\") pod \"dnsmasq-dns-757b4f8459-4548m\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.557075 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-4548m\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.557106 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-4548m\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.557172 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw479\" (UniqueName: \"kubernetes.io/projected/ee9d984c-be22-494f-b6f5-f1760daa47fa-kube-api-access-fw479\") pod \"nova-metadata-0\" (UID: \"ee9d984c-be22-494f-b6f5-f1760daa47fa\") " pod="openstack/nova-metadata-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.557209 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-4548m\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.557248 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-dns-svc\") pod \"dnsmasq-dns-757b4f8459-4548m\" (UID: 
\"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.557331 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee9d984c-be22-494f-b6f5-f1760daa47fa-logs\") pod \"nova-metadata-0\" (UID: \"ee9d984c-be22-494f-b6f5-f1760daa47fa\") " pod="openstack/nova-metadata-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.557355 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffmcr\" (UniqueName: \"kubernetes.io/projected/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-kube-api-access-ffmcr\") pod \"dnsmasq-dns-757b4f8459-4548m\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.557386 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee9d984c-be22-494f-b6f5-f1760daa47fa-config-data\") pod \"nova-metadata-0\" (UID: \"ee9d984c-be22-494f-b6f5-f1760daa47fa\") " pod="openstack/nova-metadata-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.557450 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee9d984c-be22-494f-b6f5-f1760daa47fa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ee9d984c-be22-494f-b6f5-f1760daa47fa\") " pod="openstack/nova-metadata-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.559199 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-4548m\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.559251 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-4548m\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.559782 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-dns-svc\") pod \"dnsmasq-dns-757b4f8459-4548m\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.559871 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-4548m\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.560437 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-config\") pod \"dnsmasq-dns-757b4f8459-4548m\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.560682 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee9d984c-be22-494f-b6f5-f1760daa47fa-logs\") pod \"nova-metadata-0\" (UID: \"ee9d984c-be22-494f-b6f5-f1760daa47fa\") " pod="openstack/nova-metadata-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.563477 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee9d984c-be22-494f-b6f5-f1760daa47fa-config-data\") pod \"nova-metadata-0\" (UID: \"ee9d984c-be22-494f-b6f5-f1760daa47fa\") " pod="openstack/nova-metadata-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.565453 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee9d984c-be22-494f-b6f5-f1760daa47fa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ee9d984c-be22-494f-b6f5-f1760daa47fa\") " pod="openstack/nova-metadata-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.589965 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw479\" (UniqueName: \"kubernetes.io/projected/ee9d984c-be22-494f-b6f5-f1760daa47fa-kube-api-access-fw479\") pod \"nova-metadata-0\" (UID: \"ee9d984c-be22-494f-b6f5-f1760daa47fa\") " pod="openstack/nova-metadata-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.610834 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffmcr\" (UniqueName: \"kubernetes.io/projected/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-kube-api-access-ffmcr\") pod \"dnsmasq-dns-757b4f8459-4548m\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.649488 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.681237 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:21 crc kubenswrapper[4858]: I0202 17:33:21.713052 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8ckkf"] Feb 02 17:33:21 crc kubenswrapper[4858]: W0202 17:33:21.748667 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod449436cd_88ef_480a_9905_8b120f723f8f.slice/crio-7385c48b79a32b3cf51dfd833c82886f6c63a6fa8b68f53a184922d2e3da43e9 WatchSource:0}: Error finding container 7385c48b79a32b3cf51dfd833c82886f6c63a6fa8b68f53a184922d2e3da43e9: Status 404 returned error can't find the container with id 7385c48b79a32b3cf51dfd833c82886f6c63a6fa8b68f53a184922d2e3da43e9 Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.033689 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-n46db"] Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.035463 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-n46db" Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.039888 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.050344 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.070708 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-n46db"] Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.080791 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.169333 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2wvj\" (UniqueName: \"kubernetes.io/projected/9127c71e-926f-4e20-b766-a957645d7dc9-kube-api-access-x2wvj\") pod \"nova-cell1-conductor-db-sync-n46db\" (UID: \"9127c71e-926f-4e20-b766-a957645d7dc9\") " pod="openstack/nova-cell1-conductor-db-sync-n46db" Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.169475 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9127c71e-926f-4e20-b766-a957645d7dc9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-n46db\" (UID: \"9127c71e-926f-4e20-b766-a957645d7dc9\") " pod="openstack/nova-cell1-conductor-db-sync-n46db" Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.169513 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9127c71e-926f-4e20-b766-a957645d7dc9-scripts\") pod \"nova-cell1-conductor-db-sync-n46db\" (UID: \"9127c71e-926f-4e20-b766-a957645d7dc9\") " pod="openstack/nova-cell1-conductor-db-sync-n46db" Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.169543 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9127c71e-926f-4e20-b766-a957645d7dc9-config-data\") pod \"nova-cell1-conductor-db-sync-n46db\" (UID: \"9127c71e-926f-4e20-b766-a957645d7dc9\") " pod="openstack/nova-cell1-conductor-db-sync-n46db" Feb 02 17:33:22 crc kubenswrapper[4858]: W0202 17:33:22.212088 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7433df4_440b_4a0a_ad8e_f0bdece26bc3.slice/crio-24378d50a94dd57cbe1e429fc47edb8663c5874e449fcc465da01bab97a1348a WatchSource:0}: Error finding container 24378d50a94dd57cbe1e429fc47edb8663c5874e449fcc465da01bab97a1348a: Status 404 returned error can't find the container with id 24378d50a94dd57cbe1e429fc47edb8663c5874e449fcc465da01bab97a1348a Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.217995 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.271049 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2wvj\" (UniqueName: \"kubernetes.io/projected/9127c71e-926f-4e20-b766-a957645d7dc9-kube-api-access-x2wvj\") pod \"nova-cell1-conductor-db-sync-n46db\" (UID: \"9127c71e-926f-4e20-b766-a957645d7dc9\") " pod="openstack/nova-cell1-conductor-db-sync-n46db" Feb 02 17:33:22 crc 
kubenswrapper[4858]: I0202 17:33:22.271156 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9127c71e-926f-4e20-b766-a957645d7dc9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-n46db\" (UID: \"9127c71e-926f-4e20-b766-a957645d7dc9\") " pod="openstack/nova-cell1-conductor-db-sync-n46db" Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.271209 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9127c71e-926f-4e20-b766-a957645d7dc9-scripts\") pod \"nova-cell1-conductor-db-sync-n46db\" (UID: \"9127c71e-926f-4e20-b766-a957645d7dc9\") " pod="openstack/nova-cell1-conductor-db-sync-n46db" Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.271246 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9127c71e-926f-4e20-b766-a957645d7dc9-config-data\") pod \"nova-cell1-conductor-db-sync-n46db\" (UID: \"9127c71e-926f-4e20-b766-a957645d7dc9\") " pod="openstack/nova-cell1-conductor-db-sync-n46db" Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.281924 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9127c71e-926f-4e20-b766-a957645d7dc9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-n46db\" (UID: \"9127c71e-926f-4e20-b766-a957645d7dc9\") " pod="openstack/nova-cell1-conductor-db-sync-n46db" Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.289944 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9127c71e-926f-4e20-b766-a957645d7dc9-config-data\") pod \"nova-cell1-conductor-db-sync-n46db\" (UID: \"9127c71e-926f-4e20-b766-a957645d7dc9\") " pod="openstack/nova-cell1-conductor-db-sync-n46db" Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.291226 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9127c71e-926f-4e20-b766-a957645d7dc9-scripts\") pod \"nova-cell1-conductor-db-sync-n46db\" (UID: \"9127c71e-926f-4e20-b766-a957645d7dc9\") " pod="openstack/nova-cell1-conductor-db-sync-n46db" Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.298377 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2wvj\" (UniqueName: \"kubernetes.io/projected/9127c71e-926f-4e20-b766-a957645d7dc9-kube-api-access-x2wvj\") pod \"nova-cell1-conductor-db-sync-n46db\" (UID: \"9127c71e-926f-4e20-b766-a957645d7dc9\") " pod="openstack/nova-cell1-conductor-db-sync-n46db" Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.383354 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-n46db" Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.391089 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.480358 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 17:33:22 crc kubenswrapper[4858]: W0202 17:33:22.487004 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee9d984c_be22_494f_b6f5_f1760daa47fa.slice/crio-ba2b7a0b6acfaf42247706adec0900e99a72730b7b90feabd4c7e3c039def451 WatchSource:0}: Error finding container ba2b7a0b6acfaf42247706adec0900e99a72730b7b90feabd4c7e3c039def451: Status 404 returned error can't find the container with id ba2b7a0b6acfaf42247706adec0900e99a72730b7b90feabd4c7e3c039def451 Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.606640 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9d767725-79eb-441b-9042-54ce10f9aa3b","Type":"ContainerStarted","Data":"6b9f872bfb05311d299602fe85b0da1b32a5bc5fb46721815ab1d76bd62da6ab"} Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.608941 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-4548m"] Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.616747 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b7433df4-440b-4a0a-ad8e-f0bdece26bc3","Type":"ContainerStarted","Data":"24378d50a94dd57cbe1e429fc47edb8663c5874e449fcc465da01bab97a1348a"} Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.618208 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee9d984c-be22-494f-b6f5-f1760daa47fa","Type":"ContainerStarted","Data":"ba2b7a0b6acfaf42247706adec0900e99a72730b7b90feabd4c7e3c039def451"} Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.625049 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab3475a4-dc06-489e-a4de-ac9a204c5248","Type":"ContainerStarted","Data":"87a8835029baa720a68e63ab423f6ae00259824609f20fa6562fdcbcad043224"} Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.625201 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.631709 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"aa9cda80-8059-476f-a7f3-710bb907548f","Type":"ContainerStarted","Data":"7a8ce7e3c97b7e3e2771fed04eb90ff6311e173fc2c8a879e3597d81d2b7b5d7"} Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.634096 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8ckkf" event={"ID":"449436cd-88ef-480a-9905-8b120f723f8f","Type":"ContainerStarted","Data":"6e98a6c8eb16216683547a70ceb430cb4ecf23589c16b81ac4ded50890f97096"} Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.634123 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8ckkf" event={"ID":"449436cd-88ef-480a-9905-8b120f723f8f","Type":"ContainerStarted","Data":"7385c48b79a32b3cf51dfd833c82886f6c63a6fa8b68f53a184922d2e3da43e9"} Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.651333 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=3.131821614 podStartE2EDuration="7.65131495s" podCreationTimestamp="2026-02-02 17:33:15 +0000 UTC" firstStartedPulling="2026-02-02 17:33:16.446729201 +0000 UTC m=+1097.599144466" lastFinishedPulling="2026-02-02 17:33:20.966222537 +0000 UTC m=+1102.118637802" observedRunningTime="2026-02-02 17:33:22.648461969 +0000 UTC m=+1103.800877244" watchObservedRunningTime="2026-02-02 17:33:22.65131495 +0000 UTC m=+1103.803730225" Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.674178 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-8ckkf" podStartSLOduration=2.674157286 podStartE2EDuration="2.674157286s" podCreationTimestamp="2026-02-02 17:33:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:33:22.671182022 +0000 UTC m=+1103.823597287" watchObservedRunningTime="2026-02-02 17:33:22.674157286 +0000 UTC m=+1103.826572551" Feb 02 17:33:22 crc kubenswrapper[4858]: I0202 17:33:22.998124 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-n46db"] Feb 02 17:33:23 crc kubenswrapper[4858]: W0202 17:33:23.027549 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9127c71e_926f_4e20_b766_a957645d7dc9.slice/crio-09eec7b9a0367761b67d472b8d0518d0dcca7d10f3b8158520cf3bed5807b10e WatchSource:0}: Error finding container 09eec7b9a0367761b67d472b8d0518d0dcca7d10f3b8158520cf3bed5807b10e: Status 404 returned error can't find the container with id 09eec7b9a0367761b67d472b8d0518d0dcca7d10f3b8158520cf3bed5807b10e Feb 02 17:33:23 crc kubenswrapper[4858]: I0202 17:33:23.647233 4858 generic.go:334] "Generic (PLEG): container finished" podID="21d771a5-5ae8-4ed9-9572-6ff76bb713ec" containerID="5e07a949ccd640a6d267d44f0bfc790cda7be8eb82016fad2ddd1b656ea5b06d" exitCode=0 Feb 02 17:33:23 crc kubenswrapper[4858]: I0202 17:33:23.647552 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-4548m" event={"ID":"21d771a5-5ae8-4ed9-9572-6ff76bb713ec","Type":"ContainerDied","Data":"5e07a949ccd640a6d267d44f0bfc790cda7be8eb82016fad2ddd1b656ea5b06d"} Feb 02 17:33:23 crc kubenswrapper[4858]: I0202 17:33:23.647588 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-4548m" event={"ID":"21d771a5-5ae8-4ed9-9572-6ff76bb713ec","Type":"ContainerStarted","Data":"fc962a06f6d95111c7107f8af7d692763ddfa870a7e113e3a64141ffd6e637b1"} Feb 02 17:33:23 crc kubenswrapper[4858]: I0202 17:33:23.656809 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-n46db" event={"ID":"9127c71e-926f-4e20-b766-a957645d7dc9","Type":"ContainerStarted","Data":"48f72f956691769808795f3d948f0ee552f304181e03248add235e327356b436"} Feb 02 17:33:23 crc kubenswrapper[4858]: I0202 17:33:23.656843 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-n46db" event={"ID":"9127c71e-926f-4e20-b766-a957645d7dc9","Type":"ContainerStarted","Data":"09eec7b9a0367761b67d472b8d0518d0dcca7d10f3b8158520cf3bed5807b10e"} Feb 02 17:33:23 crc kubenswrapper[4858]: I0202 17:33:23.697727 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-n46db" podStartSLOduration=1.69771049 podStartE2EDuration="1.69771049s" 
podCreationTimestamp="2026-02-02 17:33:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:33:23.684192237 +0000 UTC m=+1104.836607512" watchObservedRunningTime="2026-02-02 17:33:23.69771049 +0000 UTC m=+1104.850125755" Feb 02 17:33:24 crc kubenswrapper[4858]: I0202 17:33:24.659218 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 17:33:24 crc kubenswrapper[4858]: I0202 17:33:24.674634 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 17:33:26 crc kubenswrapper[4858]: I0202 17:33:26.687744 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b7433df4-440b-4a0a-ad8e-f0bdece26bc3","Type":"ContainerStarted","Data":"bff217f7b8652bd99a3e1e5e9553de6c779f55ebefb49f612aff97fb14586a27"} Feb 02 17:33:26 crc kubenswrapper[4858]: I0202 17:33:26.689961 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee9d984c-be22-494f-b6f5-f1760daa47fa","Type":"ContainerStarted","Data":"066f9c562111016c175187d0f894940de8952c7cadf4d95f5d33bc5504e7533f"} Feb 02 17:33:26 crc kubenswrapper[4858]: I0202 17:33:26.690017 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee9d984c-be22-494f-b6f5-f1760daa47fa","Type":"ContainerStarted","Data":"6c437d8fc14dad97232d1225abfcf171cf92df12bf5c442a127d23fbf2dd8949"} Feb 02 17:33:26 crc kubenswrapper[4858]: I0202 17:33:26.690088 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ee9d984c-be22-494f-b6f5-f1760daa47fa" containerName="nova-metadata-log" containerID="cri-o://6c437d8fc14dad97232d1225abfcf171cf92df12bf5c442a127d23fbf2dd8949" gracePeriod=30 Feb 02 17:33:26 crc kubenswrapper[4858]: I0202 17:33:26.690133 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ee9d984c-be22-494f-b6f5-f1760daa47fa" containerName="nova-metadata-metadata" containerID="cri-o://066f9c562111016c175187d0f894940de8952c7cadf4d95f5d33bc5504e7533f" gracePeriod=30 Feb 02 17:33:26 crc kubenswrapper[4858]: I0202 17:33:26.696812 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-4548m" event={"ID":"21d771a5-5ae8-4ed9-9572-6ff76bb713ec","Type":"ContainerStarted","Data":"6541c5a84a5c1f43a592ced83985462dcf67be32d2f7d625403474f651f3bfdb"} Feb 02 17:33:26 crc kubenswrapper[4858]: I0202 17:33:26.697800 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:26 crc kubenswrapper[4858]: I0202 17:33:26.699749 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"aa9cda80-8059-476f-a7f3-710bb907548f","Type":"ContainerStarted","Data":"7bca33acfbeab541042e8af6c4b6902a26f4672cf55fc38cedae257de965cc89"} Feb 02 17:33:26 crc kubenswrapper[4858]: I0202 17:33:26.699888 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="aa9cda80-8059-476f-a7f3-710bb907548f" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://7bca33acfbeab541042e8af6c4b6902a26f4672cf55fc38cedae257de965cc89" gracePeriod=30 Feb 02 17:33:26 crc kubenswrapper[4858]: I0202 17:33:26.712534 4858 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.145410321 podStartE2EDuration="6.712511578s" podCreationTimestamp="2026-02-02 17:33:20 +0000 UTC" firstStartedPulling="2026-02-02 17:33:22.216583718 +0000 UTC m=+1103.368998983" lastFinishedPulling="2026-02-02 17:33:25.783684975 +0000 UTC m=+1106.936100240" observedRunningTime="2026-02-02 17:33:26.706275812 +0000 UTC m=+1107.858691077" watchObservedRunningTime="2026-02-02 17:33:26.712511578 +0000 UTC m=+1107.864926853" Feb 02 17:33:26 crc kubenswrapper[4858]: I0202 17:33:26.724419 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9d767725-79eb-441b-9042-54ce10f9aa3b","Type":"ContainerStarted","Data":"166076aed6724411e73559ac8470e24d21f8a9b2fe67a9fd686528677d9d50fb"} Feb 02 17:33:26 crc kubenswrapper[4858]: I0202 17:33:26.724477 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9d767725-79eb-441b-9042-54ce10f9aa3b","Type":"ContainerStarted","Data":"4afda9528e1a2235d4f255e48411cdd62301abb2eb401a418311a3b79c1fd0c1"} Feb 02 17:33:26 crc kubenswrapper[4858]: I0202 17:33:26.837526 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-4548m" podStartSLOduration=5.837497585 podStartE2EDuration="5.837497585s" podCreationTimestamp="2026-02-02 17:33:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:33:26.781666695 +0000 UTC m=+1107.934081960" watchObservedRunningTime="2026-02-02 17:33:26.837497585 +0000 UTC m=+1107.989912850" Feb 02 17:33:26 crc kubenswrapper[4858]: I0202 17:33:26.848283 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.494649304 podStartE2EDuration="6.84826022s" podCreationTimestamp="2026-02-02 17:33:20 +0000 UTC" firstStartedPulling="2026-02-02 17:33:22.430706517 +0000 UTC m=+1103.583121782" lastFinishedPulling="2026-02-02 17:33:25.784317433 +0000 UTC m=+1106.936732698" observedRunningTime="2026-02-02 17:33:26.800898969 +0000 UTC m=+1107.953314254" watchObservedRunningTime="2026-02-02 17:33:26.84826022 +0000 UTC m=+1108.000675495" Feb 02 17:33:26 crc kubenswrapper[4858]: I0202 17:33:26.870700 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.60195642 podStartE2EDuration="5.870677544s" podCreationTimestamp="2026-02-02 17:33:21 +0000 UTC" firstStartedPulling="2026-02-02 17:33:22.512530603 +0000 UTC m=+1103.664945868" lastFinishedPulling="2026-02-02 17:33:25.781251727 +0000 UTC m=+1106.933666992" observedRunningTime="2026-02-02 17:33:26.856159963 +0000 UTC m=+1108.008575228" watchObservedRunningTime="2026-02-02 17:33:26.870677544 +0000 UTC m=+1108.023092809" Feb 02 17:33:26 crc kubenswrapper[4858]: I0202 17:33:26.895936 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.17712608 podStartE2EDuration="6.895907808s" podCreationTimestamp="2026-02-02 17:33:20 +0000 UTC" firstStartedPulling="2026-02-02 17:33:22.07135401 +0000 UTC m=+1103.223769275" lastFinishedPulling="2026-02-02 17:33:25.790135738 +0000 UTC m=+1106.942551003" observedRunningTime="2026-02-02 17:33:26.892641356 +0000 UTC m=+1108.045056631" watchObservedRunningTime="2026-02-02 17:33:26.895907808 +0000 UTC m=+1108.048323073" Feb 02 17:33:27 crc kubenswrapper[4858]: I0202 17:33:27.735320 
4858 generic.go:334] "Generic (PLEG): container finished" podID="ee9d984c-be22-494f-b6f5-f1760daa47fa" containerID="6c437d8fc14dad97232d1225abfcf171cf92df12bf5c442a127d23fbf2dd8949" exitCode=143 Feb 02 17:33:27 crc kubenswrapper[4858]: I0202 17:33:27.735360 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee9d984c-be22-494f-b6f5-f1760daa47fa","Type":"ContainerDied","Data":"6c437d8fc14dad97232d1225abfcf171cf92df12bf5c442a127d23fbf2dd8949"} Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.636693 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.739278 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee9d984c-be22-494f-b6f5-f1760daa47fa-logs\") pod \"ee9d984c-be22-494f-b6f5-f1760daa47fa\" (UID: \"ee9d984c-be22-494f-b6f5-f1760daa47fa\") " Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.739509 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw479\" (UniqueName: \"kubernetes.io/projected/ee9d984c-be22-494f-b6f5-f1760daa47fa-kube-api-access-fw479\") pod \"ee9d984c-be22-494f-b6f5-f1760daa47fa\" (UID: \"ee9d984c-be22-494f-b6f5-f1760daa47fa\") " Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.739571 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee9d984c-be22-494f-b6f5-f1760daa47fa-config-data\") pod \"ee9d984c-be22-494f-b6f5-f1760daa47fa\" (UID: \"ee9d984c-be22-494f-b6f5-f1760daa47fa\") " Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.739615 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee9d984c-be22-494f-b6f5-f1760daa47fa-combined-ca-bundle\") pod \"ee9d984c-be22-494f-b6f5-f1760daa47fa\" (UID: \"ee9d984c-be22-494f-b6f5-f1760daa47fa\") " Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.739879 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee9d984c-be22-494f-b6f5-f1760daa47fa-logs" (OuterVolumeSpecName: "logs") pod "ee9d984c-be22-494f-b6f5-f1760daa47fa" (UID: "ee9d984c-be22-494f-b6f5-f1760daa47fa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.740445 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee9d984c-be22-494f-b6f5-f1760daa47fa-logs\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.759824 4858 generic.go:334] "Generic (PLEG): container finished" podID="ee9d984c-be22-494f-b6f5-f1760daa47fa" containerID="066f9c562111016c175187d0f894940de8952c7cadf4d95f5d33bc5504e7533f" exitCode=0 Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.760455 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.761067 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee9d984c-be22-494f-b6f5-f1760daa47fa","Type":"ContainerDied","Data":"066f9c562111016c175187d0f894940de8952c7cadf4d95f5d33bc5504e7533f"} Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.761103 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee9d984c-be22-494f-b6f5-f1760daa47fa","Type":"ContainerDied","Data":"ba2b7a0b6acfaf42247706adec0900e99a72730b7b90feabd4c7e3c039def451"} Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.761153 4858 scope.go:117] "RemoveContainer" containerID="066f9c562111016c175187d0f894940de8952c7cadf4d95f5d33bc5504e7533f" Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.766163 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee9d984c-be22-494f-b6f5-f1760daa47fa-kube-api-access-fw479" (OuterVolumeSpecName: "kube-api-access-fw479") pod "ee9d984c-be22-494f-b6f5-f1760daa47fa" (UID: "ee9d984c-be22-494f-b6f5-f1760daa47fa"). InnerVolumeSpecName "kube-api-access-fw479". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.802634 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee9d984c-be22-494f-b6f5-f1760daa47fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee9d984c-be22-494f-b6f5-f1760daa47fa" (UID: "ee9d984c-be22-494f-b6f5-f1760daa47fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.803364 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee9d984c-be22-494f-b6f5-f1760daa47fa-config-data" (OuterVolumeSpecName: "config-data") pod "ee9d984c-be22-494f-b6f5-f1760daa47fa" (UID: "ee9d984c-be22-494f-b6f5-f1760daa47fa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.842440 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw479\" (UniqueName: \"kubernetes.io/projected/ee9d984c-be22-494f-b6f5-f1760daa47fa-kube-api-access-fw479\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.842476 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee9d984c-be22-494f-b6f5-f1760daa47fa-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.842485 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee9d984c-be22-494f-b6f5-f1760daa47fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.865162 4858 scope.go:117] "RemoveContainer" containerID="6c437d8fc14dad97232d1225abfcf171cf92df12bf5c442a127d23fbf2dd8949" Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.882549 4858 scope.go:117] "RemoveContainer" containerID="066f9c562111016c175187d0f894940de8952c7cadf4d95f5d33bc5504e7533f" Feb 02 17:33:28 crc kubenswrapper[4858]: E0202 17:33:28.883014 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"066f9c562111016c175187d0f894940de8952c7cadf4d95f5d33bc5504e7533f\": container with ID starting with 066f9c562111016c175187d0f894940de8952c7cadf4d95f5d33bc5504e7533f not found: ID does not exist" containerID="066f9c562111016c175187d0f894940de8952c7cadf4d95f5d33bc5504e7533f" Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.883081 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"066f9c562111016c175187d0f894940de8952c7cadf4d95f5d33bc5504e7533f"} err="failed to get container status \"066f9c562111016c175187d0f894940de8952c7cadf4d95f5d33bc5504e7533f\": rpc error: code = NotFound desc = could not find container \"066f9c562111016c175187d0f894940de8952c7cadf4d95f5d33bc5504e7533f\": container with ID starting with 066f9c562111016c175187d0f894940de8952c7cadf4d95f5d33bc5504e7533f not found: ID does not exist" Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.883103 4858 scope.go:117] "RemoveContainer" containerID="6c437d8fc14dad97232d1225abfcf171cf92df12bf5c442a127d23fbf2dd8949" Feb 02 17:33:28 crc kubenswrapper[4858]: E0202 17:33:28.883312 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c437d8fc14dad97232d1225abfcf171cf92df12bf5c442a127d23fbf2dd8949\": container with ID starting with 6c437d8fc14dad97232d1225abfcf171cf92df12bf5c442a127d23fbf2dd8949 not found: ID does not exist" containerID="6c437d8fc14dad97232d1225abfcf171cf92df12bf5c442a127d23fbf2dd8949" Feb 02 17:33:28 crc kubenswrapper[4858]: I0202 17:33:28.883342 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c437d8fc14dad97232d1225abfcf171cf92df12bf5c442a127d23fbf2dd8949"} err="failed to get container status \"6c437d8fc14dad97232d1225abfcf171cf92df12bf5c442a127d23fbf2dd8949\": rpc error: code = NotFound desc = could not find container \"6c437d8fc14dad97232d1225abfcf171cf92df12bf5c442a127d23fbf2dd8949\": container with ID starting with 6c437d8fc14dad97232d1225abfcf171cf92df12bf5c442a127d23fbf2dd8949 not found: ID does not exist" Feb 02 17:33:29 crc 
kubenswrapper[4858]: I0202 17:33:29.101422 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.119110 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.130106 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 17:33:29 crc kubenswrapper[4858]: E0202 17:33:29.130563 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee9d984c-be22-494f-b6f5-f1760daa47fa" containerName="nova-metadata-log" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.130583 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee9d984c-be22-494f-b6f5-f1760daa47fa" containerName="nova-metadata-log" Feb 02 17:33:29 crc kubenswrapper[4858]: E0202 17:33:29.130599 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee9d984c-be22-494f-b6f5-f1760daa47fa" containerName="nova-metadata-metadata" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.130607 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee9d984c-be22-494f-b6f5-f1760daa47fa" containerName="nova-metadata-metadata" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.130869 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee9d984c-be22-494f-b6f5-f1760daa47fa" containerName="nova-metadata-log" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.130909 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee9d984c-be22-494f-b6f5-f1760daa47fa" containerName="nova-metadata-metadata" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.132133 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.137816 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.138027 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.163303 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.249611 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8efb6ba-0288-4fff-8399-8d263bcba580-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f8efb6ba-0288-4fff-8399-8d263bcba580\") " pod="openstack/nova-metadata-0" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.249710 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8efb6ba-0288-4fff-8399-8d263bcba580-logs\") pod \"nova-metadata-0\" (UID: \"f8efb6ba-0288-4fff-8399-8d263bcba580\") " pod="openstack/nova-metadata-0" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.249743 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8efb6ba-0288-4fff-8399-8d263bcba580-config-data\") pod \"nova-metadata-0\" (UID: \"f8efb6ba-0288-4fff-8399-8d263bcba580\") " pod="openstack/nova-metadata-0" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.249759 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8efb6ba-0288-4fff-8399-8d263bcba580-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f8efb6ba-0288-4fff-8399-8d263bcba580\") " pod="openstack/nova-metadata-0" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.249827 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb48v\" (UniqueName: \"kubernetes.io/projected/f8efb6ba-0288-4fff-8399-8d263bcba580-kube-api-access-mb48v\") pod \"nova-metadata-0\" (UID: \"f8efb6ba-0288-4fff-8399-8d263bcba580\") " pod="openstack/nova-metadata-0" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.352236 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8efb6ba-0288-4fff-8399-8d263bcba580-logs\") pod \"nova-metadata-0\" (UID: \"f8efb6ba-0288-4fff-8399-8d263bcba580\") " pod="openstack/nova-metadata-0" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.352332 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8efb6ba-0288-4fff-8399-8d263bcba580-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f8efb6ba-0288-4fff-8399-8d263bcba580\") " pod="openstack/nova-metadata-0" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.352365 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8efb6ba-0288-4fff-8399-8d263bcba580-config-data\") pod \"nova-metadata-0\" (UID: \"f8efb6ba-0288-4fff-8399-8d263bcba580\") " pod="openstack/nova-metadata-0" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.352468 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb48v\" (UniqueName: \"kubernetes.io/projected/f8efb6ba-0288-4fff-8399-8d263bcba580-kube-api-access-mb48v\") pod \"nova-metadata-0\" (UID: \"f8efb6ba-0288-4fff-8399-8d263bcba580\") " pod="openstack/nova-metadata-0" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.352538 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8efb6ba-0288-4fff-8399-8d263bcba580-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f8efb6ba-0288-4fff-8399-8d263bcba580\") " pod="openstack/nova-metadata-0" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.352626 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8efb6ba-0288-4fff-8399-8d263bcba580-logs\") pod \"nova-metadata-0\" (UID: \"f8efb6ba-0288-4fff-8399-8d263bcba580\") " pod="openstack/nova-metadata-0" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.357601 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8efb6ba-0288-4fff-8399-8d263bcba580-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f8efb6ba-0288-4fff-8399-8d263bcba580\") " pod="openstack/nova-metadata-0" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.364510 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8efb6ba-0288-4fff-8399-8d263bcba580-config-data\") pod \"nova-metadata-0\" (UID: \"f8efb6ba-0288-4fff-8399-8d263bcba580\") " 
pod="openstack/nova-metadata-0" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.367525 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8efb6ba-0288-4fff-8399-8d263bcba580-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f8efb6ba-0288-4fff-8399-8d263bcba580\") " pod="openstack/nova-metadata-0" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.373024 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb48v\" (UniqueName: \"kubernetes.io/projected/f8efb6ba-0288-4fff-8399-8d263bcba580-kube-api-access-mb48v\") pod \"nova-metadata-0\" (UID: \"f8efb6ba-0288-4fff-8399-8d263bcba580\") " pod="openstack/nova-metadata-0" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.458502 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 17:33:29 crc kubenswrapper[4858]: I0202 17:33:29.934631 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 17:33:29 crc kubenswrapper[4858]: W0202 17:33:29.950045 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8efb6ba_0288_4fff_8399_8d263bcba580.slice/crio-c749c790a674b59836f2a7ad7b5eee728c5469f0e84a38b908f5e11d2e58075e WatchSource:0}: Error finding container c749c790a674b59836f2a7ad7b5eee728c5469f0e84a38b908f5e11d2e58075e: Status 404 returned error can't find the container with id c749c790a674b59836f2a7ad7b5eee728c5469f0e84a38b908f5e11d2e58075e Feb 02 17:33:30 crc kubenswrapper[4858]: I0202 17:33:30.417849 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee9d984c-be22-494f-b6f5-f1760daa47fa" path="/var/lib/kubelet/pods/ee9d984c-be22-494f-b6f5-f1760daa47fa/volumes" Feb 02 17:33:30 crc kubenswrapper[4858]: I0202 17:33:30.781105 4858 generic.go:334] "Generic (PLEG): container finished" podID="449436cd-88ef-480a-9905-8b120f723f8f" containerID="6e98a6c8eb16216683547a70ceb430cb4ecf23589c16b81ac4ded50890f97096" exitCode=0 Feb 02 17:33:30 crc kubenswrapper[4858]: I0202 17:33:30.781191 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8ckkf" event={"ID":"449436cd-88ef-480a-9905-8b120f723f8f","Type":"ContainerDied","Data":"6e98a6c8eb16216683547a70ceb430cb4ecf23589c16b81ac4ded50890f97096"} Feb 02 17:33:30 crc kubenswrapper[4858]: I0202 17:33:30.784126 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8efb6ba-0288-4fff-8399-8d263bcba580","Type":"ContainerStarted","Data":"a1d9fffa0a999c1b5de0e82f7d44b5d1e2ec5da55e08082461ac107b6cfb4c41"} Feb 02 17:33:30 crc kubenswrapper[4858]: I0202 17:33:30.784173 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8efb6ba-0288-4fff-8399-8d263bcba580","Type":"ContainerStarted","Data":"afb94beb344d3f7913a6c85f621aaf90d02b09df560479d2f8908a23167d9d9f"} Feb 02 17:33:30 crc kubenswrapper[4858]: I0202 17:33:30.784185 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8efb6ba-0288-4fff-8399-8d263bcba580","Type":"ContainerStarted","Data":"c749c790a674b59836f2a7ad7b5eee728c5469f0e84a38b908f5e11d2e58075e"} Feb 02 17:33:31 crc kubenswrapper[4858]: I0202 17:33:31.276431 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 17:33:31 crc 
kubenswrapper[4858]: I0202 17:33:31.276507 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 17:33:31 crc kubenswrapper[4858]: I0202 17:33:31.335937 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:31 crc kubenswrapper[4858]: I0202 17:33:31.433354 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 17:33:31 crc kubenswrapper[4858]: I0202 17:33:31.433729 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 17:33:31 crc kubenswrapper[4858]: I0202 17:33:31.480907 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 17:33:31 crc kubenswrapper[4858]: I0202 17:33:31.512007 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.5119380270000002 podStartE2EDuration="2.511938027s" podCreationTimestamp="2026-02-02 17:33:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:33:30.830501934 +0000 UTC m=+1111.982917219" watchObservedRunningTime="2026-02-02 17:33:31.511938027 +0000 UTC m=+1112.664353322" Feb 02 17:33:31 crc kubenswrapper[4858]: I0202 17:33:31.683259 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:33:31 crc kubenswrapper[4858]: I0202 17:33:31.771155 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-rk27f"] Feb 02 17:33:31 crc kubenswrapper[4858]: I0202 17:33:31.771382 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" podUID="be2cff80-fb1c-4421-a892-a140ab4e7dec" containerName="dnsmasq-dns" containerID="cri-o://1ac1e831913db6c85c069349536f5e09fc41a2e3f94be33d1f755715e18e3c17" gracePeriod=10 Feb 02 17:33:31 crc kubenswrapper[4858]: I0202 17:33:31.803488 4858 generic.go:334] "Generic (PLEG): container finished" podID="9127c71e-926f-4e20-b766-a957645d7dc9" containerID="48f72f956691769808795f3d948f0ee552f304181e03248add235e327356b436" exitCode=0 Feb 02 17:33:31 crc kubenswrapper[4858]: I0202 17:33:31.803686 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-n46db" event={"ID":"9127c71e-926f-4e20-b766-a957645d7dc9","Type":"ContainerDied","Data":"48f72f956691769808795f3d948f0ee552f304181e03248add235e327356b436"} Feb 02 17:33:31 crc kubenswrapper[4858]: I0202 17:33:31.859738 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.265758 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8ckkf" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.361465 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9d767725-79eb-441b-9042-54ce10f9aa3b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.361842 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9d767725-79eb-441b-9042-54ce10f9aa3b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.386469 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.411725 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/449436cd-88ef-480a-9905-8b120f723f8f-combined-ca-bundle\") pod \"449436cd-88ef-480a-9905-8b120f723f8f\" (UID: \"449436cd-88ef-480a-9905-8b120f723f8f\") " Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.411845 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/449436cd-88ef-480a-9905-8b120f723f8f-scripts\") pod \"449436cd-88ef-480a-9905-8b120f723f8f\" (UID: \"449436cd-88ef-480a-9905-8b120f723f8f\") " Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.411942 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6jqp\" (UniqueName: \"kubernetes.io/projected/449436cd-88ef-480a-9905-8b120f723f8f-kube-api-access-g6jqp\") pod \"449436cd-88ef-480a-9905-8b120f723f8f\" (UID: \"449436cd-88ef-480a-9905-8b120f723f8f\") " Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.412117 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/449436cd-88ef-480a-9905-8b120f723f8f-config-data\") pod \"449436cd-88ef-480a-9905-8b120f723f8f\" (UID: \"449436cd-88ef-480a-9905-8b120f723f8f\") " Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.422780 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/449436cd-88ef-480a-9905-8b120f723f8f-scripts" (OuterVolumeSpecName: "scripts") pod "449436cd-88ef-480a-9905-8b120f723f8f" (UID: "449436cd-88ef-480a-9905-8b120f723f8f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.431266 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/449436cd-88ef-480a-9905-8b120f723f8f-kube-api-access-g6jqp" (OuterVolumeSpecName: "kube-api-access-g6jqp") pod "449436cd-88ef-480a-9905-8b120f723f8f" (UID: "449436cd-88ef-480a-9905-8b120f723f8f"). InnerVolumeSpecName "kube-api-access-g6jqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.446436 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/449436cd-88ef-480a-9905-8b120f723f8f-config-data" (OuterVolumeSpecName: "config-data") pod "449436cd-88ef-480a-9905-8b120f723f8f" (UID: "449436cd-88ef-480a-9905-8b120f723f8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.460056 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/449436cd-88ef-480a-9905-8b120f723f8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "449436cd-88ef-480a-9905-8b120f723f8f" (UID: "449436cd-88ef-480a-9905-8b120f723f8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.513796 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-ovsdbserver-sb\") pod \"be2cff80-fb1c-4421-a892-a140ab4e7dec\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.513857 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-dns-swift-storage-0\") pod \"be2cff80-fb1c-4421-a892-a140ab4e7dec\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.514401 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-ovsdbserver-nb\") pod \"be2cff80-fb1c-4421-a892-a140ab4e7dec\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.514454 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-dns-svc\") pod \"be2cff80-fb1c-4421-a892-a140ab4e7dec\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.514667 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-config\") pod \"be2cff80-fb1c-4421-a892-a140ab4e7dec\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.514713 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxq2r\" (UniqueName: \"kubernetes.io/projected/be2cff80-fb1c-4421-a892-a140ab4e7dec-kube-api-access-zxq2r\") pod \"be2cff80-fb1c-4421-a892-a140ab4e7dec\" (UID: \"be2cff80-fb1c-4421-a892-a140ab4e7dec\") " Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.515284 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/449436cd-88ef-480a-9905-8b120f723f8f-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.515308 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/449436cd-88ef-480a-9905-8b120f723f8f-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.515318 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/449436cd-88ef-480a-9905-8b120f723f8f-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.515329 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6jqp\" (UniqueName: \"kubernetes.io/projected/449436cd-88ef-480a-9905-8b120f723f8f-kube-api-access-g6jqp\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.521124 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be2cff80-fb1c-4421-a892-a140ab4e7dec-kube-api-access-zxq2r" (OuterVolumeSpecName: "kube-api-access-zxq2r") pod "be2cff80-fb1c-4421-a892-a140ab4e7dec" (UID: "be2cff80-fb1c-4421-a892-a140ab4e7dec"). InnerVolumeSpecName "kube-api-access-zxq2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.571163 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "be2cff80-fb1c-4421-a892-a140ab4e7dec" (UID: "be2cff80-fb1c-4421-a892-a140ab4e7dec"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.574683 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "be2cff80-fb1c-4421-a892-a140ab4e7dec" (UID: "be2cff80-fb1c-4421-a892-a140ab4e7dec"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.582639 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "be2cff80-fb1c-4421-a892-a140ab4e7dec" (UID: "be2cff80-fb1c-4421-a892-a140ab4e7dec"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.587812 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-config" (OuterVolumeSpecName: "config") pod "be2cff80-fb1c-4421-a892-a140ab4e7dec" (UID: "be2cff80-fb1c-4421-a892-a140ab4e7dec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.589628 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "be2cff80-fb1c-4421-a892-a140ab4e7dec" (UID: "be2cff80-fb1c-4421-a892-a140ab4e7dec"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.617199 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.617242 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxq2r\" (UniqueName: \"kubernetes.io/projected/be2cff80-fb1c-4421-a892-a140ab4e7dec-kube-api-access-zxq2r\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.617257 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.617266 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.617274 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.617284 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be2cff80-fb1c-4421-a892-a140ab4e7dec-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.815926 4858 generic.go:334] "Generic (PLEG): container finished" podID="be2cff80-fb1c-4421-a892-a140ab4e7dec" containerID="1ac1e831913db6c85c069349536f5e09fc41a2e3f94be33d1f755715e18e3c17" exitCode=0 Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.816042 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" event={"ID":"be2cff80-fb1c-4421-a892-a140ab4e7dec","Type":"ContainerDied","Data":"1ac1e831913db6c85c069349536f5e09fc41a2e3f94be33d1f755715e18e3c17"} Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.816141 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" event={"ID":"be2cff80-fb1c-4421-a892-a140ab4e7dec","Type":"ContainerDied","Data":"27fff9b44a00a80472ed16185619d1e4b86bce297f8817cd92b880ed292cf33a"} Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.816168 4858 scope.go:117] "RemoveContainer" containerID="1ac1e831913db6c85c069349536f5e09fc41a2e3f94be33d1f755715e18e3c17" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.816205 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-rk27f" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.818294 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8ckkf" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.819179 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8ckkf" event={"ID":"449436cd-88ef-480a-9905-8b120f723f8f","Type":"ContainerDied","Data":"7385c48b79a32b3cf51dfd833c82886f6c63a6fa8b68f53a184922d2e3da43e9"} Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.819281 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7385c48b79a32b3cf51dfd833c82886f6c63a6fa8b68f53a184922d2e3da43e9" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.871346 4858 scope.go:117] "RemoveContainer" containerID="f236f3d73748ec08ad2077af7851ef08099ca6a7584573e89379a5691e3e500c" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.881923 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-rk27f"] Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.893825 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-rk27f"] Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.989624 4858 scope.go:117] "RemoveContainer" containerID="1ac1e831913db6c85c069349536f5e09fc41a2e3f94be33d1f755715e18e3c17" Feb 02 17:33:32 crc kubenswrapper[4858]: E0202 17:33:32.990213 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ac1e831913db6c85c069349536f5e09fc41a2e3f94be33d1f755715e18e3c17\": container with ID starting with 1ac1e831913db6c85c069349536f5e09fc41a2e3f94be33d1f755715e18e3c17 not found: ID does not exist" containerID="1ac1e831913db6c85c069349536f5e09fc41a2e3f94be33d1f755715e18e3c17" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.990247 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ac1e831913db6c85c069349536f5e09fc41a2e3f94be33d1f755715e18e3c17"} err="failed to get container status \"1ac1e831913db6c85c069349536f5e09fc41a2e3f94be33d1f755715e18e3c17\": rpc error: code = NotFound desc = could not find container \"1ac1e831913db6c85c069349536f5e09fc41a2e3f94be33d1f755715e18e3c17\": container with ID starting with 1ac1e831913db6c85c069349536f5e09fc41a2e3f94be33d1f755715e18e3c17 not found: ID does not exist" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.990279 4858 scope.go:117] "RemoveContainer" containerID="f236f3d73748ec08ad2077af7851ef08099ca6a7584573e89379a5691e3e500c" Feb 02 17:33:32 crc kubenswrapper[4858]: E0202 17:33:32.991313 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f236f3d73748ec08ad2077af7851ef08099ca6a7584573e89379a5691e3e500c\": container with ID starting with f236f3d73748ec08ad2077af7851ef08099ca6a7584573e89379a5691e3e500c not found: ID does not exist" containerID="f236f3d73748ec08ad2077af7851ef08099ca6a7584573e89379a5691e3e500c" Feb 02 17:33:32 crc kubenswrapper[4858]: I0202 17:33:32.991338 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f236f3d73748ec08ad2077af7851ef08099ca6a7584573e89379a5691e3e500c"} err="failed to get container status \"f236f3d73748ec08ad2077af7851ef08099ca6a7584573e89379a5691e3e500c\": rpc error: code = NotFound desc = could not find container \"f236f3d73748ec08ad2077af7851ef08099ca6a7584573e89379a5691e3e500c\": container with ID starting with f236f3d73748ec08ad2077af7851ef08099ca6a7584573e89379a5691e3e500c 
not found: ID does not exist" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.022199 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.022578 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9d767725-79eb-441b-9042-54ce10f9aa3b" containerName="nova-api-log" containerID="cri-o://4afda9528e1a2235d4f255e48411cdd62301abb2eb401a418311a3b79c1fd0c1" gracePeriod=30 Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.023227 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9d767725-79eb-441b-9042-54ce10f9aa3b" containerName="nova-api-api" containerID="cri-o://166076aed6724411e73559ac8470e24d21f8a9b2fe67a9fd686528677d9d50fb" gracePeriod=30 Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.050121 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.050369 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f8efb6ba-0288-4fff-8399-8d263bcba580" containerName="nova-metadata-log" containerID="cri-o://afb94beb344d3f7913a6c85f621aaf90d02b09df560479d2f8908a23167d9d9f" gracePeriod=30 Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.050614 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f8efb6ba-0288-4fff-8399-8d263bcba580" containerName="nova-metadata-metadata" containerID="cri-o://a1d9fffa0a999c1b5de0e82f7d44b5d1e2ec5da55e08082461ac107b6cfb4c41" gracePeriod=30 Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.073358 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.330500 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-n46db" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.445654 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9127c71e-926f-4e20-b766-a957645d7dc9-combined-ca-bundle\") pod \"9127c71e-926f-4e20-b766-a957645d7dc9\" (UID: \"9127c71e-926f-4e20-b766-a957645d7dc9\") " Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.450602 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9127c71e-926f-4e20-b766-a957645d7dc9-config-data\") pod \"9127c71e-926f-4e20-b766-a957645d7dc9\" (UID: \"9127c71e-926f-4e20-b766-a957645d7dc9\") " Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.450675 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2wvj\" (UniqueName: \"kubernetes.io/projected/9127c71e-926f-4e20-b766-a957645d7dc9-kube-api-access-x2wvj\") pod \"9127c71e-926f-4e20-b766-a957645d7dc9\" (UID: \"9127c71e-926f-4e20-b766-a957645d7dc9\") " Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.450737 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9127c71e-926f-4e20-b766-a957645d7dc9-scripts\") pod \"9127c71e-926f-4e20-b766-a957645d7dc9\" (UID: \"9127c71e-926f-4e20-b766-a957645d7dc9\") " Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.457019 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9127c71e-926f-4e20-b766-a957645d7dc9-scripts" (OuterVolumeSpecName: "scripts") pod "9127c71e-926f-4e20-b766-a957645d7dc9" (UID: "9127c71e-926f-4e20-b766-a957645d7dc9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.457364 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9127c71e-926f-4e20-b766-a957645d7dc9-kube-api-access-x2wvj" (OuterVolumeSpecName: "kube-api-access-x2wvj") pod "9127c71e-926f-4e20-b766-a957645d7dc9" (UID: "9127c71e-926f-4e20-b766-a957645d7dc9"). InnerVolumeSpecName "kube-api-access-x2wvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.503773 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9127c71e-926f-4e20-b766-a957645d7dc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9127c71e-926f-4e20-b766-a957645d7dc9" (UID: "9127c71e-926f-4e20-b766-a957645d7dc9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.505605 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9127c71e-926f-4e20-b766-a957645d7dc9-config-data" (OuterVolumeSpecName: "config-data") pod "9127c71e-926f-4e20-b766-a957645d7dc9" (UID: "9127c71e-926f-4e20-b766-a957645d7dc9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.553781 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9127c71e-926f-4e20-b766-a957645d7dc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.553817 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9127c71e-926f-4e20-b766-a957645d7dc9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.553826 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2wvj\" (UniqueName: \"kubernetes.io/projected/9127c71e-926f-4e20-b766-a957645d7dc9-kube-api-access-x2wvj\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.553836 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9127c71e-926f-4e20-b766-a957645d7dc9-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.575324 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.655501 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb48v\" (UniqueName: \"kubernetes.io/projected/f8efb6ba-0288-4fff-8399-8d263bcba580-kube-api-access-mb48v\") pod \"f8efb6ba-0288-4fff-8399-8d263bcba580\" (UID: \"f8efb6ba-0288-4fff-8399-8d263bcba580\") " Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.655655 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8efb6ba-0288-4fff-8399-8d263bcba580-nova-metadata-tls-certs\") pod \"f8efb6ba-0288-4fff-8399-8d263bcba580\" (UID: \"f8efb6ba-0288-4fff-8399-8d263bcba580\") " Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.655702 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8efb6ba-0288-4fff-8399-8d263bcba580-config-data\") pod \"f8efb6ba-0288-4fff-8399-8d263bcba580\" (UID: \"f8efb6ba-0288-4fff-8399-8d263bcba580\") " Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.655730 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8efb6ba-0288-4fff-8399-8d263bcba580-logs\") pod \"f8efb6ba-0288-4fff-8399-8d263bcba580\" (UID: \"f8efb6ba-0288-4fff-8399-8d263bcba580\") " Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.655783 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8efb6ba-0288-4fff-8399-8d263bcba580-combined-ca-bundle\") pod \"f8efb6ba-0288-4fff-8399-8d263bcba580\" (UID: \"f8efb6ba-0288-4fff-8399-8d263bcba580\") " Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.656388 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8efb6ba-0288-4fff-8399-8d263bcba580-logs" (OuterVolumeSpecName: "logs") pod "f8efb6ba-0288-4fff-8399-8d263bcba580" (UID: "f8efb6ba-0288-4fff-8399-8d263bcba580"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.659585 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8efb6ba-0288-4fff-8399-8d263bcba580-kube-api-access-mb48v" (OuterVolumeSpecName: "kube-api-access-mb48v") pod "f8efb6ba-0288-4fff-8399-8d263bcba580" (UID: "f8efb6ba-0288-4fff-8399-8d263bcba580"). InnerVolumeSpecName "kube-api-access-mb48v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.683149 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8efb6ba-0288-4fff-8399-8d263bcba580-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8efb6ba-0288-4fff-8399-8d263bcba580" (UID: "f8efb6ba-0288-4fff-8399-8d263bcba580"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.683172 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8efb6ba-0288-4fff-8399-8d263bcba580-config-data" (OuterVolumeSpecName: "config-data") pod "f8efb6ba-0288-4fff-8399-8d263bcba580" (UID: "f8efb6ba-0288-4fff-8399-8d263bcba580"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.700397 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8efb6ba-0288-4fff-8399-8d263bcba580-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f8efb6ba-0288-4fff-8399-8d263bcba580" (UID: "f8efb6ba-0288-4fff-8399-8d263bcba580"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.758438 4858 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8efb6ba-0288-4fff-8399-8d263bcba580-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.758486 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8efb6ba-0288-4fff-8399-8d263bcba580-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.758500 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8efb6ba-0288-4fff-8399-8d263bcba580-logs\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.758516 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8efb6ba-0288-4fff-8399-8d263bcba580-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.758528 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb48v\" (UniqueName: \"kubernetes.io/projected/f8efb6ba-0288-4fff-8399-8d263bcba580-kube-api-access-mb48v\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.869144 4858 generic.go:334] "Generic (PLEG): container finished" podID="9d767725-79eb-441b-9042-54ce10f9aa3b" containerID="4afda9528e1a2235d4f255e48411cdd62301abb2eb401a418311a3b79c1fd0c1" exitCode=143 Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.869321 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9d767725-79eb-441b-9042-54ce10f9aa3b","Type":"ContainerDied","Data":"4afda9528e1a2235d4f255e48411cdd62301abb2eb401a418311a3b79c1fd0c1"} Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.889744 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-n46db" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.890532 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-n46db" event={"ID":"9127c71e-926f-4e20-b766-a957645d7dc9","Type":"ContainerDied","Data":"09eec7b9a0367761b67d472b8d0518d0dcca7d10f3b8158520cf3bed5807b10e"} Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.890578 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09eec7b9a0367761b67d472b8d0518d0dcca7d10f3b8158520cf3bed5807b10e" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.917671 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.918409 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8efb6ba-0288-4fff-8399-8d263bcba580","Type":"ContainerDied","Data":"a1d9fffa0a999c1b5de0e82f7d44b5d1e2ec5da55e08082461ac107b6cfb4c41"} Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.918585 4858 scope.go:117] "RemoveContainer" containerID="a1d9fffa0a999c1b5de0e82f7d44b5d1e2ec5da55e08082461ac107b6cfb4c41" Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.917518 4858 generic.go:334] "Generic (PLEG): container finished" podID="f8efb6ba-0288-4fff-8399-8d263bcba580" containerID="a1d9fffa0a999c1b5de0e82f7d44b5d1e2ec5da55e08082461ac107b6cfb4c41" exitCode=0 Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.945274 4858 generic.go:334] "Generic (PLEG): container finished" podID="f8efb6ba-0288-4fff-8399-8d263bcba580" containerID="afb94beb344d3f7913a6c85f621aaf90d02b09df560479d2f8908a23167d9d9f" exitCode=143 Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.945553 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b7433df4-440b-4a0a-ad8e-f0bdece26bc3" containerName="nova-scheduler-scheduler" containerID="cri-o://bff217f7b8652bd99a3e1e5e9553de6c779f55ebefb49f612aff97fb14586a27" gracePeriod=30 Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.945758 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8efb6ba-0288-4fff-8399-8d263bcba580","Type":"ContainerDied","Data":"afb94beb344d3f7913a6c85f621aaf90d02b09df560479d2f8908a23167d9d9f"} Feb 02 17:33:33 crc kubenswrapper[4858]: I0202 17:33:33.945806 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f8efb6ba-0288-4fff-8399-8d263bcba580","Type":"ContainerDied","Data":"c749c790a674b59836f2a7ad7b5eee728c5469f0e84a38b908f5e11d2e58075e"} Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.008456 4858 scope.go:117] "RemoveContainer" containerID="afb94beb344d3f7913a6c85f621aaf90d02b09df560479d2f8908a23167d9d9f" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.040045 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 17:33:34 crc kubenswrapper[4858]: E0202 17:33:34.040540 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be2cff80-fb1c-4421-a892-a140ab4e7dec" containerName="dnsmasq-dns" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.040566 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="be2cff80-fb1c-4421-a892-a140ab4e7dec" containerName="dnsmasq-dns" Feb 02 17:33:34 crc kubenswrapper[4858]: E0202 17:33:34.040594 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8efb6ba-0288-4fff-8399-8d263bcba580" containerName="nova-metadata-metadata" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.040602 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8efb6ba-0288-4fff-8399-8d263bcba580" containerName="nova-metadata-metadata" Feb 02 17:33:34 crc kubenswrapper[4858]: E0202 17:33:34.040627 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8efb6ba-0288-4fff-8399-8d263bcba580" containerName="nova-metadata-log" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.040633 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8efb6ba-0288-4fff-8399-8d263bcba580" containerName="nova-metadata-log" Feb 02 
17:33:34 crc kubenswrapper[4858]: E0202 17:33:34.040644 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be2cff80-fb1c-4421-a892-a140ab4e7dec" containerName="init" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.040650 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="be2cff80-fb1c-4421-a892-a140ab4e7dec" containerName="init" Feb 02 17:33:34 crc kubenswrapper[4858]: E0202 17:33:34.040657 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9127c71e-926f-4e20-b766-a957645d7dc9" containerName="nova-cell1-conductor-db-sync" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.040663 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9127c71e-926f-4e20-b766-a957645d7dc9" containerName="nova-cell1-conductor-db-sync" Feb 02 17:33:34 crc kubenswrapper[4858]: E0202 17:33:34.040677 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="449436cd-88ef-480a-9905-8b120f723f8f" containerName="nova-manage" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.040683 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="449436cd-88ef-480a-9905-8b120f723f8f" containerName="nova-manage" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.040872 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8efb6ba-0288-4fff-8399-8d263bcba580" containerName="nova-metadata-log" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.040891 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="be2cff80-fb1c-4421-a892-a140ab4e7dec" containerName="dnsmasq-dns" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.040904 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8efb6ba-0288-4fff-8399-8d263bcba580" containerName="nova-metadata-metadata" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.040918 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="449436cd-88ef-480a-9905-8b120f723f8f" containerName="nova-manage" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.040931 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9127c71e-926f-4e20-b766-a957645d7dc9" containerName="nova-cell1-conductor-db-sync" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.041599 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.044862 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.052437 4858 scope.go:117] "RemoveContainer" containerID="a1d9fffa0a999c1b5de0e82f7d44b5d1e2ec5da55e08082461ac107b6cfb4c41" Feb 02 17:33:34 crc kubenswrapper[4858]: E0202 17:33:34.056696 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1d9fffa0a999c1b5de0e82f7d44b5d1e2ec5da55e08082461ac107b6cfb4c41\": container with ID starting with a1d9fffa0a999c1b5de0e82f7d44b5d1e2ec5da55e08082461ac107b6cfb4c41 not found: ID does not exist" containerID="a1d9fffa0a999c1b5de0e82f7d44b5d1e2ec5da55e08082461ac107b6cfb4c41" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.056833 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1d9fffa0a999c1b5de0e82f7d44b5d1e2ec5da55e08082461ac107b6cfb4c41"} err="failed to get container status \"a1d9fffa0a999c1b5de0e82f7d44b5d1e2ec5da55e08082461ac107b6cfb4c41\": rpc error: code = NotFound desc = could not find container \"a1d9fffa0a999c1b5de0e82f7d44b5d1e2ec5da55e08082461ac107b6cfb4c41\": container with ID starting with a1d9fffa0a999c1b5de0e82f7d44b5d1e2ec5da55e08082461ac107b6cfb4c41 not found: ID does not exist" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.056923 4858 scope.go:117] "RemoveContainer" containerID="afb94beb344d3f7913a6c85f621aaf90d02b09df560479d2f8908a23167d9d9f" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.057944 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 17:33:34 crc kubenswrapper[4858]: E0202 17:33:34.058184 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afb94beb344d3f7913a6c85f621aaf90d02b09df560479d2f8908a23167d9d9f\": container with ID starting with afb94beb344d3f7913a6c85f621aaf90d02b09df560479d2f8908a23167d9d9f not found: ID does not exist" containerID="afb94beb344d3f7913a6c85f621aaf90d02b09df560479d2f8908a23167d9d9f" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.085275 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb94beb344d3f7913a6c85f621aaf90d02b09df560479d2f8908a23167d9d9f"} err="failed to get container status \"afb94beb344d3f7913a6c85f621aaf90d02b09df560479d2f8908a23167d9d9f\": rpc error: code = NotFound desc = could not find container \"afb94beb344d3f7913a6c85f621aaf90d02b09df560479d2f8908a23167d9d9f\": container with ID starting with afb94beb344d3f7913a6c85f621aaf90d02b09df560479d2f8908a23167d9d9f not found: ID does not exist" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.085311 4858 scope.go:117] "RemoveContainer" containerID="a1d9fffa0a999c1b5de0e82f7d44b5d1e2ec5da55e08082461ac107b6cfb4c41" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.092533 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1d9fffa0a999c1b5de0e82f7d44b5d1e2ec5da55e08082461ac107b6cfb4c41"} err="failed to get container status \"a1d9fffa0a999c1b5de0e82f7d44b5d1e2ec5da55e08082461ac107b6cfb4c41\": rpc error: code = NotFound desc = could not find container \"a1d9fffa0a999c1b5de0e82f7d44b5d1e2ec5da55e08082461ac107b6cfb4c41\": container with ID starting with 
a1d9fffa0a999c1b5de0e82f7d44b5d1e2ec5da55e08082461ac107b6cfb4c41 not found: ID does not exist" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.092803 4858 scope.go:117] "RemoveContainer" containerID="afb94beb344d3f7913a6c85f621aaf90d02b09df560479d2f8908a23167d9d9f" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.093473 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb94beb344d3f7913a6c85f621aaf90d02b09df560479d2f8908a23167d9d9f"} err="failed to get container status \"afb94beb344d3f7913a6c85f621aaf90d02b09df560479d2f8908a23167d9d9f\": rpc error: code = NotFound desc = could not find container \"afb94beb344d3f7913a6c85f621aaf90d02b09df560479d2f8908a23167d9d9f\": container with ID starting with afb94beb344d3f7913a6c85f621aaf90d02b09df560479d2f8908a23167d9d9f not found: ID does not exist" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.105131 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.129032 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.158729 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.160430 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.162549 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.163185 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.168561 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.169516 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/846f9c74-1b28-40d3-b2f9-ed7b380fa34f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"846f9c74-1b28-40d3-b2f9-ed7b380fa34f\") " pod="openstack/nova-cell1-conductor-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.169626 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/846f9c74-1b28-40d3-b2f9-ed7b380fa34f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"846f9c74-1b28-40d3-b2f9-ed7b380fa34f\") " pod="openstack/nova-cell1-conductor-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.169661 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbx2s\" (UniqueName: \"kubernetes.io/projected/846f9c74-1b28-40d3-b2f9-ed7b380fa34f-kube-api-access-qbx2s\") pod \"nova-cell1-conductor-0\" (UID: \"846f9c74-1b28-40d3-b2f9-ed7b380fa34f\") " pod="openstack/nova-cell1-conductor-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.271466 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab13622-4d61-4621-b865-45ede50fcaeb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3ab13622-4d61-4621-b865-45ede50fcaeb\") " 
pod="openstack/nova-metadata-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.271570 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpgdq\" (UniqueName: \"kubernetes.io/projected/3ab13622-4d61-4621-b865-45ede50fcaeb-kube-api-access-qpgdq\") pod \"nova-metadata-0\" (UID: \"3ab13622-4d61-4621-b865-45ede50fcaeb\") " pod="openstack/nova-metadata-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.271659 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/846f9c74-1b28-40d3-b2f9-ed7b380fa34f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"846f9c74-1b28-40d3-b2f9-ed7b380fa34f\") " pod="openstack/nova-cell1-conductor-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.271725 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab13622-4d61-4621-b865-45ede50fcaeb-config-data\") pod \"nova-metadata-0\" (UID: \"3ab13622-4d61-4621-b865-45ede50fcaeb\") " pod="openstack/nova-metadata-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.271762 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ab13622-4d61-4621-b865-45ede50fcaeb-logs\") pod \"nova-metadata-0\" (UID: \"3ab13622-4d61-4621-b865-45ede50fcaeb\") " pod="openstack/nova-metadata-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.271868 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/846f9c74-1b28-40d3-b2f9-ed7b380fa34f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"846f9c74-1b28-40d3-b2f9-ed7b380fa34f\") " pod="openstack/nova-cell1-conductor-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.271906 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbx2s\" (UniqueName: \"kubernetes.io/projected/846f9c74-1b28-40d3-b2f9-ed7b380fa34f-kube-api-access-qbx2s\") pod \"nova-cell1-conductor-0\" (UID: \"846f9c74-1b28-40d3-b2f9-ed7b380fa34f\") " pod="openstack/nova-cell1-conductor-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.271952 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab13622-4d61-4621-b865-45ede50fcaeb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3ab13622-4d61-4621-b865-45ede50fcaeb\") " pod="openstack/nova-metadata-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.276380 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/846f9c74-1b28-40d3-b2f9-ed7b380fa34f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"846f9c74-1b28-40d3-b2f9-ed7b380fa34f\") " pod="openstack/nova-cell1-conductor-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.276418 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/846f9c74-1b28-40d3-b2f9-ed7b380fa34f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"846f9c74-1b28-40d3-b2f9-ed7b380fa34f\") " pod="openstack/nova-cell1-conductor-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.288718 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qbx2s\" (UniqueName: \"kubernetes.io/projected/846f9c74-1b28-40d3-b2f9-ed7b380fa34f-kube-api-access-qbx2s\") pod \"nova-cell1-conductor-0\" (UID: \"846f9c74-1b28-40d3-b2f9-ed7b380fa34f\") " pod="openstack/nova-cell1-conductor-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.373357 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpgdq\" (UniqueName: \"kubernetes.io/projected/3ab13622-4d61-4621-b865-45ede50fcaeb-kube-api-access-qpgdq\") pod \"nova-metadata-0\" (UID: \"3ab13622-4d61-4621-b865-45ede50fcaeb\") " pod="openstack/nova-metadata-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.373505 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab13622-4d61-4621-b865-45ede50fcaeb-config-data\") pod \"nova-metadata-0\" (UID: \"3ab13622-4d61-4621-b865-45ede50fcaeb\") " pod="openstack/nova-metadata-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.373541 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ab13622-4d61-4621-b865-45ede50fcaeb-logs\") pod \"nova-metadata-0\" (UID: \"3ab13622-4d61-4621-b865-45ede50fcaeb\") " pod="openstack/nova-metadata-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.373615 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab13622-4d61-4621-b865-45ede50fcaeb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3ab13622-4d61-4621-b865-45ede50fcaeb\") " pod="openstack/nova-metadata-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.373663 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab13622-4d61-4621-b865-45ede50fcaeb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3ab13622-4d61-4621-b865-45ede50fcaeb\") " pod="openstack/nova-metadata-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.375413 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ab13622-4d61-4621-b865-45ede50fcaeb-logs\") pod \"nova-metadata-0\" (UID: \"3ab13622-4d61-4621-b865-45ede50fcaeb\") " pod="openstack/nova-metadata-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.375538 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.377334 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab13622-4d61-4621-b865-45ede50fcaeb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3ab13622-4d61-4621-b865-45ede50fcaeb\") " pod="openstack/nova-metadata-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.378835 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab13622-4d61-4621-b865-45ede50fcaeb-config-data\") pod \"nova-metadata-0\" (UID: \"3ab13622-4d61-4621-b865-45ede50fcaeb\") " pod="openstack/nova-metadata-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.384418 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab13622-4d61-4621-b865-45ede50fcaeb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3ab13622-4d61-4621-b865-45ede50fcaeb\") " pod="openstack/nova-metadata-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.394633 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpgdq\" (UniqueName: \"kubernetes.io/projected/3ab13622-4d61-4621-b865-45ede50fcaeb-kube-api-access-qpgdq\") pod \"nova-metadata-0\" (UID: \"3ab13622-4d61-4621-b865-45ede50fcaeb\") " pod="openstack/nova-metadata-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.435269 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be2cff80-fb1c-4421-a892-a140ab4e7dec" path="/var/lib/kubelet/pods/be2cff80-fb1c-4421-a892-a140ab4e7dec/volumes" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.440167 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8efb6ba-0288-4fff-8399-8d263bcba580" path="/var/lib/kubelet/pods/f8efb6ba-0288-4fff-8399-8d263bcba580/volumes" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.482084 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.865872 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.958390 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"846f9c74-1b28-40d3-b2f9-ed7b380fa34f","Type":"ContainerStarted","Data":"4162144f0608a1e482522ff379d65e82c0e97d91eb33d562e776be1a4163e782"} Feb 02 17:33:34 crc kubenswrapper[4858]: I0202 17:33:34.965894 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 17:33:34 crc kubenswrapper[4858]: W0202 17:33:34.969624 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ab13622_4d61_4621_b865_45ede50fcaeb.slice/crio-17f784dd9e255dab5d38857ab55d0ebc99ba6b4b8c363401a3ef7f50999758bc WatchSource:0}: Error finding container 17f784dd9e255dab5d38857ab55d0ebc99ba6b4b8c363401a3ef7f50999758bc: Status 404 returned error can't find the container with id 17f784dd9e255dab5d38857ab55d0ebc99ba6b4b8c363401a3ef7f50999758bc Feb 02 17:33:35 crc kubenswrapper[4858]: I0202 17:33:35.982345 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"846f9c74-1b28-40d3-b2f9-ed7b380fa34f","Type":"ContainerStarted","Data":"bef46cfad0815cb00367afe98869f9f09429551e9ca0700a2120a8711b2aed4f"} Feb 02 17:33:35 crc kubenswrapper[4858]: I0202 17:33:35.983164 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 02 17:33:35 crc kubenswrapper[4858]: I0202 17:33:35.987214 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ab13622-4d61-4621-b865-45ede50fcaeb","Type":"ContainerStarted","Data":"448bcbb4084d67ca18b597fcaa91e184905eb4a03ebf3a5ae6a82dd3252c3092"} Feb 02 17:33:35 crc kubenswrapper[4858]: I0202 17:33:35.987480 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ab13622-4d61-4621-b865-45ede50fcaeb","Type":"ContainerStarted","Data":"a45ca9c3436d4e9a318561a0f4518b985cf1546a2c8e186de80f2e05bab94b59"} Feb 02 17:33:35 crc kubenswrapper[4858]: I0202 17:33:35.987616 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ab13622-4d61-4621-b865-45ede50fcaeb","Type":"ContainerStarted","Data":"17f784dd9e255dab5d38857ab55d0ebc99ba6b4b8c363401a3ef7f50999758bc"} Feb 02 17:33:36 crc kubenswrapper[4858]: I0202 17:33:36.008630 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.008608738 podStartE2EDuration="3.008608738s" podCreationTimestamp="2026-02-02 17:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:33:36.003358429 +0000 UTC m=+1117.155773724" watchObservedRunningTime="2026-02-02 17:33:36.008608738 +0000 UTC m=+1117.161024003" Feb 02 17:33:36 crc kubenswrapper[4858]: I0202 17:33:36.040622 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.040594503 podStartE2EDuration="2.040594503s" podCreationTimestamp="2026-02-02 17:33:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-02 17:33:36.030691733 +0000 UTC m=+1117.183106998" watchObservedRunningTime="2026-02-02 17:33:36.040594503 +0000 UTC m=+1117.193009808" Feb 02 17:33:36 crc kubenswrapper[4858]: E0202 17:33:36.436037 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bff217f7b8652bd99a3e1e5e9553de6c779f55ebefb49f612aff97fb14586a27" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 17:33:36 crc kubenswrapper[4858]: E0202 17:33:36.438109 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bff217f7b8652bd99a3e1e5e9553de6c779f55ebefb49f612aff97fb14586a27" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 17:33:36 crc kubenswrapper[4858]: E0202 17:33:36.439986 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bff217f7b8652bd99a3e1e5e9553de6c779f55ebefb49f612aff97fb14586a27" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 17:33:36 crc kubenswrapper[4858]: E0202 17:33:36.440022 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b7433df4-440b-4a0a-ad8e-f0bdece26bc3" containerName="nova-scheduler-scheduler" Feb 02 17:33:37 crc kubenswrapper[4858]: I0202 17:33:37.872715 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 17:33:37 crc kubenswrapper[4858]: I0202 17:33:37.965698 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d767725-79eb-441b-9042-54ce10f9aa3b-logs\") pod \"9d767725-79eb-441b-9042-54ce10f9aa3b\" (UID: \"9d767725-79eb-441b-9042-54ce10f9aa3b\") " Feb 02 17:33:37 crc kubenswrapper[4858]: I0202 17:33:37.965844 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkmjc\" (UniqueName: \"kubernetes.io/projected/9d767725-79eb-441b-9042-54ce10f9aa3b-kube-api-access-pkmjc\") pod \"9d767725-79eb-441b-9042-54ce10f9aa3b\" (UID: \"9d767725-79eb-441b-9042-54ce10f9aa3b\") " Feb 02 17:33:37 crc kubenswrapper[4858]: I0202 17:33:37.966280 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d767725-79eb-441b-9042-54ce10f9aa3b-logs" (OuterVolumeSpecName: "logs") pod "9d767725-79eb-441b-9042-54ce10f9aa3b" (UID: "9d767725-79eb-441b-9042-54ce10f9aa3b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:33:37 crc kubenswrapper[4858]: I0202 17:33:37.967155 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d767725-79eb-441b-9042-54ce10f9aa3b-combined-ca-bundle\") pod \"9d767725-79eb-441b-9042-54ce10f9aa3b\" (UID: \"9d767725-79eb-441b-9042-54ce10f9aa3b\") " Feb 02 17:33:37 crc kubenswrapper[4858]: I0202 17:33:37.967416 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d767725-79eb-441b-9042-54ce10f9aa3b-config-data\") pod \"9d767725-79eb-441b-9042-54ce10f9aa3b\" (UID: \"9d767725-79eb-441b-9042-54ce10f9aa3b\") " Feb 02 17:33:37 crc kubenswrapper[4858]: I0202 17:33:37.968526 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d767725-79eb-441b-9042-54ce10f9aa3b-logs\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:37 crc kubenswrapper[4858]: I0202 17:33:37.972072 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d767725-79eb-441b-9042-54ce10f9aa3b-kube-api-access-pkmjc" (OuterVolumeSpecName: "kube-api-access-pkmjc") pod "9d767725-79eb-441b-9042-54ce10f9aa3b" (UID: "9d767725-79eb-441b-9042-54ce10f9aa3b"). InnerVolumeSpecName "kube-api-access-pkmjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:33:37 crc kubenswrapper[4858]: I0202 17:33:37.999054 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d767725-79eb-441b-9042-54ce10f9aa3b-config-data" (OuterVolumeSpecName: "config-data") pod "9d767725-79eb-441b-9042-54ce10f9aa3b" (UID: "9d767725-79eb-441b-9042-54ce10f9aa3b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.003039 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d767725-79eb-441b-9042-54ce10f9aa3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d767725-79eb-441b-9042-54ce10f9aa3b" (UID: "9d767725-79eb-441b-9042-54ce10f9aa3b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.018710 4858 generic.go:334] "Generic (PLEG): container finished" podID="b7433df4-440b-4a0a-ad8e-f0bdece26bc3" containerID="bff217f7b8652bd99a3e1e5e9553de6c779f55ebefb49f612aff97fb14586a27" exitCode=0 Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.018777 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b7433df4-440b-4a0a-ad8e-f0bdece26bc3","Type":"ContainerDied","Data":"bff217f7b8652bd99a3e1e5e9553de6c779f55ebefb49f612aff97fb14586a27"} Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.021395 4858 generic.go:334] "Generic (PLEG): container finished" podID="9d767725-79eb-441b-9042-54ce10f9aa3b" containerID="166076aed6724411e73559ac8470e24d21f8a9b2fe67a9fd686528677d9d50fb" exitCode=0 Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.021571 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9d767725-79eb-441b-9042-54ce10f9aa3b","Type":"ContainerDied","Data":"166076aed6724411e73559ac8470e24d21f8a9b2fe67a9fd686528677d9d50fb"} Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.021747 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9d767725-79eb-441b-9042-54ce10f9aa3b","Type":"ContainerDied","Data":"6b9f872bfb05311d299602fe85b0da1b32a5bc5fb46721815ab1d76bd62da6ab"} Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.021847 4858 scope.go:117] "RemoveContainer" containerID="166076aed6724411e73559ac8470e24d21f8a9b2fe67a9fd686528677d9d50fb" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.021608 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.070793 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkmjc\" (UniqueName: \"kubernetes.io/projected/9d767725-79eb-441b-9042-54ce10f9aa3b-kube-api-access-pkmjc\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.070828 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d767725-79eb-441b-9042-54ce10f9aa3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.070860 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d767725-79eb-441b-9042-54ce10f9aa3b-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.095413 4858 scope.go:117] "RemoveContainer" containerID="4afda9528e1a2235d4f255e48411cdd62301abb2eb401a418311a3b79c1fd0c1" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.105318 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.128809 4858 scope.go:117] "RemoveContainer" containerID="166076aed6724411e73559ac8470e24d21f8a9b2fe67a9fd686528677d9d50fb" Feb 02 17:33:38 crc kubenswrapper[4858]: E0202 17:33:38.133072 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"166076aed6724411e73559ac8470e24d21f8a9b2fe67a9fd686528677d9d50fb\": container with ID starting with 166076aed6724411e73559ac8470e24d21f8a9b2fe67a9fd686528677d9d50fb not found: ID does not exist" 
containerID="166076aed6724411e73559ac8470e24d21f8a9b2fe67a9fd686528677d9d50fb" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.133133 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"166076aed6724411e73559ac8470e24d21f8a9b2fe67a9fd686528677d9d50fb"} err="failed to get container status \"166076aed6724411e73559ac8470e24d21f8a9b2fe67a9fd686528677d9d50fb\": rpc error: code = NotFound desc = could not find container \"166076aed6724411e73559ac8470e24d21f8a9b2fe67a9fd686528677d9d50fb\": container with ID starting with 166076aed6724411e73559ac8470e24d21f8a9b2fe67a9fd686528677d9d50fb not found: ID does not exist" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.133162 4858 scope.go:117] "RemoveContainer" containerID="4afda9528e1a2235d4f255e48411cdd62301abb2eb401a418311a3b79c1fd0c1" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.133256 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 17:33:38 crc kubenswrapper[4858]: E0202 17:33:38.133813 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4afda9528e1a2235d4f255e48411cdd62301abb2eb401a418311a3b79c1fd0c1\": container with ID starting with 4afda9528e1a2235d4f255e48411cdd62301abb2eb401a418311a3b79c1fd0c1 not found: ID does not exist" containerID="4afda9528e1a2235d4f255e48411cdd62301abb2eb401a418311a3b79c1fd0c1" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.133843 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4afda9528e1a2235d4f255e48411cdd62301abb2eb401a418311a3b79c1fd0c1"} err="failed to get container status \"4afda9528e1a2235d4f255e48411cdd62301abb2eb401a418311a3b79c1fd0c1\": rpc error: code = NotFound desc = could not find container \"4afda9528e1a2235d4f255e48411cdd62301abb2eb401a418311a3b79c1fd0c1\": container with ID starting with 4afda9528e1a2235d4f255e48411cdd62301abb2eb401a418311a3b79c1fd0c1 not found: ID does not exist" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.147207 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 17:33:38 crc kubenswrapper[4858]: E0202 17:33:38.147665 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d767725-79eb-441b-9042-54ce10f9aa3b" containerName="nova-api-log" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.147680 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d767725-79eb-441b-9042-54ce10f9aa3b" containerName="nova-api-log" Feb 02 17:33:38 crc kubenswrapper[4858]: E0202 17:33:38.147695 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d767725-79eb-441b-9042-54ce10f9aa3b" containerName="nova-api-api" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.147701 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d767725-79eb-441b-9042-54ce10f9aa3b" containerName="nova-api-api" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.147905 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d767725-79eb-441b-9042-54ce10f9aa3b" containerName="nova-api-log" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.147930 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d767725-79eb-441b-9042-54ce10f9aa3b" containerName="nova-api-api" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.162572 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.167135 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.167221 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.232052 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.282917 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79296b02-fa25-434c-ab5a-6ebb9a116de2-logs\") pod \"nova-api-0\" (UID: \"79296b02-fa25-434c-ab5a-6ebb9a116de2\") " pod="openstack/nova-api-0" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.283002 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79296b02-fa25-434c-ab5a-6ebb9a116de2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"79296b02-fa25-434c-ab5a-6ebb9a116de2\") " pod="openstack/nova-api-0" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.283067 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79296b02-fa25-434c-ab5a-6ebb9a116de2-config-data\") pod \"nova-api-0\" (UID: \"79296b02-fa25-434c-ab5a-6ebb9a116de2\") " pod="openstack/nova-api-0" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.283090 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm7rn\" (UniqueName: \"kubernetes.io/projected/79296b02-fa25-434c-ab5a-6ebb9a116de2-kube-api-access-wm7rn\") pod \"nova-api-0\" (UID: \"79296b02-fa25-434c-ab5a-6ebb9a116de2\") " pod="openstack/nova-api-0" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.384531 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7433df4-440b-4a0a-ad8e-f0bdece26bc3-combined-ca-bundle\") pod \"b7433df4-440b-4a0a-ad8e-f0bdece26bc3\" (UID: \"b7433df4-440b-4a0a-ad8e-f0bdece26bc3\") " Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.384613 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlr4l\" (UniqueName: \"kubernetes.io/projected/b7433df4-440b-4a0a-ad8e-f0bdece26bc3-kube-api-access-vlr4l\") pod \"b7433df4-440b-4a0a-ad8e-f0bdece26bc3\" (UID: \"b7433df4-440b-4a0a-ad8e-f0bdece26bc3\") " Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.384768 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7433df4-440b-4a0a-ad8e-f0bdece26bc3-config-data\") pod \"b7433df4-440b-4a0a-ad8e-f0bdece26bc3\" (UID: \"b7433df4-440b-4a0a-ad8e-f0bdece26bc3\") " Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.385187 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79296b02-fa25-434c-ab5a-6ebb9a116de2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"79296b02-fa25-434c-ab5a-6ebb9a116de2\") " pod="openstack/nova-api-0" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.385321 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79296b02-fa25-434c-ab5a-6ebb9a116de2-config-data\") pod \"nova-api-0\" (UID: \"79296b02-fa25-434c-ab5a-6ebb9a116de2\") " pod="openstack/nova-api-0" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.385366 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm7rn\" (UniqueName: \"kubernetes.io/projected/79296b02-fa25-434c-ab5a-6ebb9a116de2-kube-api-access-wm7rn\") pod \"nova-api-0\" (UID: \"79296b02-fa25-434c-ab5a-6ebb9a116de2\") " pod="openstack/nova-api-0" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.385506 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79296b02-fa25-434c-ab5a-6ebb9a116de2-logs\") pod \"nova-api-0\" (UID: \"79296b02-fa25-434c-ab5a-6ebb9a116de2\") " pod="openstack/nova-api-0" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.386300 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79296b02-fa25-434c-ab5a-6ebb9a116de2-logs\") pod \"nova-api-0\" (UID: \"79296b02-fa25-434c-ab5a-6ebb9a116de2\") " pod="openstack/nova-api-0" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.389498 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79296b02-fa25-434c-ab5a-6ebb9a116de2-config-data\") pod \"nova-api-0\" (UID: \"79296b02-fa25-434c-ab5a-6ebb9a116de2\") " pod="openstack/nova-api-0" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.389923 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7433df4-440b-4a0a-ad8e-f0bdece26bc3-kube-api-access-vlr4l" (OuterVolumeSpecName: "kube-api-access-vlr4l") pod "b7433df4-440b-4a0a-ad8e-f0bdece26bc3" (UID: "b7433df4-440b-4a0a-ad8e-f0bdece26bc3"). InnerVolumeSpecName "kube-api-access-vlr4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.390652 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79296b02-fa25-434c-ab5a-6ebb9a116de2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"79296b02-fa25-434c-ab5a-6ebb9a116de2\") " pod="openstack/nova-api-0" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.402375 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm7rn\" (UniqueName: \"kubernetes.io/projected/79296b02-fa25-434c-ab5a-6ebb9a116de2-kube-api-access-wm7rn\") pod \"nova-api-0\" (UID: \"79296b02-fa25-434c-ab5a-6ebb9a116de2\") " pod="openstack/nova-api-0" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.412586 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7433df4-440b-4a0a-ad8e-f0bdece26bc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7433df4-440b-4a0a-ad8e-f0bdece26bc3" (UID: "b7433df4-440b-4a0a-ad8e-f0bdece26bc3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.413213 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d767725-79eb-441b-9042-54ce10f9aa3b" path="/var/lib/kubelet/pods/9d767725-79eb-441b-9042-54ce10f9aa3b/volumes" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.426647 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7433df4-440b-4a0a-ad8e-f0bdece26bc3-config-data" (OuterVolumeSpecName: "config-data") pod "b7433df4-440b-4a0a-ad8e-f0bdece26bc3" (UID: "b7433df4-440b-4a0a-ad8e-f0bdece26bc3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.487176 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7433df4-440b-4a0a-ad8e-f0bdece26bc3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.487326 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlr4l\" (UniqueName: \"kubernetes.io/projected/b7433df4-440b-4a0a-ad8e-f0bdece26bc3-kube-api-access-vlr4l\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.487413 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7433df4-440b-4a0a-ad8e-f0bdece26bc3-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:38 crc kubenswrapper[4858]: I0202 17:33:38.490466 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.006927 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 17:33:39 crc kubenswrapper[4858]: W0202 17:33:39.008298 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79296b02_fa25_434c_ab5a_6ebb9a116de2.slice/crio-e5d5e303223044cd6a857f8f95ffd0c5f1d3729ed6f9a62a39b9884531b4bb5b WatchSource:0}: Error finding container e5d5e303223044cd6a857f8f95ffd0c5f1d3729ed6f9a62a39b9884531b4bb5b: Status 404 returned error can't find the container with id e5d5e303223044cd6a857f8f95ffd0c5f1d3729ed6f9a62a39b9884531b4bb5b Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.038416 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b7433df4-440b-4a0a-ad8e-f0bdece26bc3","Type":"ContainerDied","Data":"24378d50a94dd57cbe1e429fc47edb8663c5874e449fcc465da01bab97a1348a"} Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.038501 4858 scope.go:117] "RemoveContainer" containerID="bff217f7b8652bd99a3e1e5e9553de6c779f55ebefb49f612aff97fb14586a27" Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.038638 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.043667 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79296b02-fa25-434c-ab5a-6ebb9a116de2","Type":"ContainerStarted","Data":"e5d5e303223044cd6a857f8f95ffd0c5f1d3729ed6f9a62a39b9884531b4bb5b"} Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.103236 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.109087 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.117898 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 17:33:39 crc kubenswrapper[4858]: E0202 17:33:39.118458 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7433df4-440b-4a0a-ad8e-f0bdece26bc3" containerName="nova-scheduler-scheduler" Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.118483 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7433df4-440b-4a0a-ad8e-f0bdece26bc3" containerName="nova-scheduler-scheduler" Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.118726 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7433df4-440b-4a0a-ad8e-f0bdece26bc3" containerName="nova-scheduler-scheduler" Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.121552 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.123645 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.126027 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.308931 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ch9s\" (UniqueName: \"kubernetes.io/projected/53aea7b5-fb90-427a-b245-2c49de3d9ca4-kube-api-access-9ch9s\") pod \"nova-scheduler-0\" (UID: \"53aea7b5-fb90-427a-b245-2c49de3d9ca4\") " pod="openstack/nova-scheduler-0" Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.309267 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53aea7b5-fb90-427a-b245-2c49de3d9ca4-config-data\") pod \"nova-scheduler-0\" (UID: \"53aea7b5-fb90-427a-b245-2c49de3d9ca4\") " pod="openstack/nova-scheduler-0" Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.309291 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53aea7b5-fb90-427a-b245-2c49de3d9ca4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"53aea7b5-fb90-427a-b245-2c49de3d9ca4\") " pod="openstack/nova-scheduler-0" Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.411273 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ch9s\" (UniqueName: \"kubernetes.io/projected/53aea7b5-fb90-427a-b245-2c49de3d9ca4-kube-api-access-9ch9s\") pod \"nova-scheduler-0\" (UID: \"53aea7b5-fb90-427a-b245-2c49de3d9ca4\") " pod="openstack/nova-scheduler-0" Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.411331 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53aea7b5-fb90-427a-b245-2c49de3d9ca4-config-data\") pod \"nova-scheduler-0\" (UID: \"53aea7b5-fb90-427a-b245-2c49de3d9ca4\") " pod="openstack/nova-scheduler-0" Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.411358 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53aea7b5-fb90-427a-b245-2c49de3d9ca4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"53aea7b5-fb90-427a-b245-2c49de3d9ca4\") " pod="openstack/nova-scheduler-0" Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.415095 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53aea7b5-fb90-427a-b245-2c49de3d9ca4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"53aea7b5-fb90-427a-b245-2c49de3d9ca4\") " pod="openstack/nova-scheduler-0" Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.415448 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53aea7b5-fb90-427a-b245-2c49de3d9ca4-config-data\") pod \"nova-scheduler-0\" (UID: \"53aea7b5-fb90-427a-b245-2c49de3d9ca4\") " pod="openstack/nova-scheduler-0" Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.430791 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ch9s\" (UniqueName: \"kubernetes.io/projected/53aea7b5-fb90-427a-b245-2c49de3d9ca4-kube-api-access-9ch9s\") pod \"nova-scheduler-0\" (UID: \"53aea7b5-fb90-427a-b245-2c49de3d9ca4\") " pod="openstack/nova-scheduler-0" Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.473510 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.483473 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.483536 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 17:33:39 crc kubenswrapper[4858]: I0202 17:33:39.900483 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 17:33:40 crc kubenswrapper[4858]: I0202 17:33:40.056805 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"53aea7b5-fb90-427a-b245-2c49de3d9ca4","Type":"ContainerStarted","Data":"e6c0a9184e65b32c8ad985698375fc19933627063ebc413e19bd64668cb24193"} Feb 02 17:33:40 crc kubenswrapper[4858]: I0202 17:33:40.059588 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79296b02-fa25-434c-ab5a-6ebb9a116de2","Type":"ContainerStarted","Data":"b11884c5550ac9f923693ddd95c3ee6d4e6a6f50240f72c72f01ceddde9ef4cb"} Feb 02 17:33:40 crc kubenswrapper[4858]: I0202 17:33:40.059614 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79296b02-fa25-434c-ab5a-6ebb9a116de2","Type":"ContainerStarted","Data":"ca2030dde18fb80d4d7314dc521f46b6ee88aa6f5114216bd3c52f1b73b4c293"} Feb 02 17:33:40 crc kubenswrapper[4858]: I0202 17:33:40.094083 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.094058032 podStartE2EDuration="2.094058032s" podCreationTimestamp="2026-02-02 17:33:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:33:40.081881417 +0000 UTC m=+1121.234296682" watchObservedRunningTime="2026-02-02 17:33:40.094058032 +0000 UTC m=+1121.246473327" Feb 02 17:33:40 crc kubenswrapper[4858]: I0202 17:33:40.426641 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7433df4-440b-4a0a-ad8e-f0bdece26bc3" path="/var/lib/kubelet/pods/b7433df4-440b-4a0a-ad8e-f0bdece26bc3/volumes" Feb 02 17:33:41 crc kubenswrapper[4858]: I0202 17:33:41.069704 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"53aea7b5-fb90-427a-b245-2c49de3d9ca4","Type":"ContainerStarted","Data":"ec7f43dd332db924d071caa1f4e6b590845f807c17211d96ce30dc7b9e664e66"} Feb 02 17:33:41 crc kubenswrapper[4858]: I0202 17:33:41.093680 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.093659417 podStartE2EDuration="2.093659417s" podCreationTimestamp="2026-02-02 17:33:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:33:41.088547043 +0000 UTC m=+1122.240962308" watchObservedRunningTime="2026-02-02 17:33:41.093659417 +0000 UTC m=+1122.246074682" Feb 02 17:33:44 crc kubenswrapper[4858]: I0202 17:33:44.426021 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 02 17:33:44 crc kubenswrapper[4858]: I0202 17:33:44.474702 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 17:33:44 crc kubenswrapper[4858]: I0202 17:33:44.483675 4858 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 17:33:44 crc kubenswrapper[4858]: I0202 17:33:44.484768 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 17:33:45 crc kubenswrapper[4858]: I0202 17:33:45.500193 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3ab13622-4d61-4621-b865-45ede50fcaeb" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 17:33:45 crc kubenswrapper[4858]: I0202 17:33:45.500210 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3ab13622-4d61-4621-b865-45ede50fcaeb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 17:33:45 crc kubenswrapper[4858]: I0202 17:33:45.898192 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 17:33:48 crc kubenswrapper[4858]: I0202 17:33:48.491425 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 17:33:48 crc kubenswrapper[4858]: I0202 17:33:48.491822 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 17:33:49 crc kubenswrapper[4858]: I0202 17:33:49.474691 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 17:33:49 crc kubenswrapper[4858]: I0202 17:33:49.526271 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 17:33:49 crc kubenswrapper[4858]: I0202 17:33:49.526698 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="59a6029b-5965-40a3-9dbd-0b4784340ce0" containerName="kube-state-metrics" containerID="cri-o://14e71f14a7bb05f3ce4f0fc3b81925ecdce5f94da39722236c53b6c8dadc5526" gracePeriod=30 Feb 02 17:33:49 crc kubenswrapper[4858]: I0202 17:33:49.529082 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 17:33:49 crc kubenswrapper[4858]: I0202 17:33:49.573157 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="79296b02-fa25-434c-ab5a-6ebb9a116de2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 17:33:49 crc kubenswrapper[4858]: I0202 17:33:49.573167 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="79296b02-fa25-434c-ab5a-6ebb9a116de2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.131279 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.162798 4858 generic.go:334] "Generic (PLEG): container finished" podID="59a6029b-5965-40a3-9dbd-0b4784340ce0" containerID="14e71f14a7bb05f3ce4f0fc3b81925ecdce5f94da39722236c53b6c8dadc5526" exitCode=2 Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.163813 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.164268 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"59a6029b-5965-40a3-9dbd-0b4784340ce0","Type":"ContainerDied","Data":"14e71f14a7bb05f3ce4f0fc3b81925ecdce5f94da39722236c53b6c8dadc5526"} Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.164292 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"59a6029b-5965-40a3-9dbd-0b4784340ce0","Type":"ContainerDied","Data":"68e339094d1b5d08f6ffd7e5729af4aa1b570ddd1e70d336db92bb286a602e53"} Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.164309 4858 scope.go:117] "RemoveContainer" containerID="14e71f14a7bb05f3ce4f0fc3b81925ecdce5f94da39722236c53b6c8dadc5526" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.290056 4858 scope.go:117] "RemoveContainer" containerID="14e71f14a7bb05f3ce4f0fc3b81925ecdce5f94da39722236c53b6c8dadc5526" Feb 02 17:33:50 crc kubenswrapper[4858]: E0202 17:33:50.290630 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14e71f14a7bb05f3ce4f0fc3b81925ecdce5f94da39722236c53b6c8dadc5526\": container with ID starting with 14e71f14a7bb05f3ce4f0fc3b81925ecdce5f94da39722236c53b6c8dadc5526 not found: ID does not exist" containerID="14e71f14a7bb05f3ce4f0fc3b81925ecdce5f94da39722236c53b6c8dadc5526" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.290673 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14e71f14a7bb05f3ce4f0fc3b81925ecdce5f94da39722236c53b6c8dadc5526"} err="failed to get container status \"14e71f14a7bb05f3ce4f0fc3b81925ecdce5f94da39722236c53b6c8dadc5526\": rpc error: code = NotFound desc = could not find container \"14e71f14a7bb05f3ce4f0fc3b81925ecdce5f94da39722236c53b6c8dadc5526\": container with ID starting with 14e71f14a7bb05f3ce4f0fc3b81925ecdce5f94da39722236c53b6c8dadc5526 not found: ID does not exist" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.303239 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.327652 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2k5g\" (UniqueName: \"kubernetes.io/projected/59a6029b-5965-40a3-9dbd-0b4784340ce0-kube-api-access-r2k5g\") pod \"59a6029b-5965-40a3-9dbd-0b4784340ce0\" (UID: \"59a6029b-5965-40a3-9dbd-0b4784340ce0\") " Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.333705 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59a6029b-5965-40a3-9dbd-0b4784340ce0-kube-api-access-r2k5g" (OuterVolumeSpecName: "kube-api-access-r2k5g") pod "59a6029b-5965-40a3-9dbd-0b4784340ce0" (UID: "59a6029b-5965-40a3-9dbd-0b4784340ce0"). InnerVolumeSpecName "kube-api-access-r2k5g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.431296 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2k5g\" (UniqueName: \"kubernetes.io/projected/59a6029b-5965-40a3-9dbd-0b4784340ce0-kube-api-access-r2k5g\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.486248 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.493400 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.516400 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 17:33:50 crc kubenswrapper[4858]: E0202 17:33:50.516808 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59a6029b-5965-40a3-9dbd-0b4784340ce0" containerName="kube-state-metrics" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.516825 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="59a6029b-5965-40a3-9dbd-0b4784340ce0" containerName="kube-state-metrics" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.517020 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="59a6029b-5965-40a3-9dbd-0b4784340ce0" containerName="kube-state-metrics" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.517666 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.522061 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.522428 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.531666 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.534016 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/25eaef2f-c235-44b2-847b-6d4a275f1c3d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"25eaef2f-c235-44b2-847b-6d4a275f1c3d\") " pod="openstack/kube-state-metrics-0" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.534121 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25eaef2f-c235-44b2-847b-6d4a275f1c3d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"25eaef2f-c235-44b2-847b-6d4a275f1c3d\") " pod="openstack/kube-state-metrics-0" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.534190 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/25eaef2f-c235-44b2-847b-6d4a275f1c3d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"25eaef2f-c235-44b2-847b-6d4a275f1c3d\") " pod="openstack/kube-state-metrics-0" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.534290 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtzvn\" (UniqueName: 
\"kubernetes.io/projected/25eaef2f-c235-44b2-847b-6d4a275f1c3d-kube-api-access-xtzvn\") pod \"kube-state-metrics-0\" (UID: \"25eaef2f-c235-44b2-847b-6d4a275f1c3d\") " pod="openstack/kube-state-metrics-0" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.635874 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtzvn\" (UniqueName: \"kubernetes.io/projected/25eaef2f-c235-44b2-847b-6d4a275f1c3d-kube-api-access-xtzvn\") pod \"kube-state-metrics-0\" (UID: \"25eaef2f-c235-44b2-847b-6d4a275f1c3d\") " pod="openstack/kube-state-metrics-0" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.636295 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/25eaef2f-c235-44b2-847b-6d4a275f1c3d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"25eaef2f-c235-44b2-847b-6d4a275f1c3d\") " pod="openstack/kube-state-metrics-0" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.636446 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25eaef2f-c235-44b2-847b-6d4a275f1c3d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"25eaef2f-c235-44b2-847b-6d4a275f1c3d\") " pod="openstack/kube-state-metrics-0" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.636617 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/25eaef2f-c235-44b2-847b-6d4a275f1c3d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"25eaef2f-c235-44b2-847b-6d4a275f1c3d\") " pod="openstack/kube-state-metrics-0" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.643552 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/25eaef2f-c235-44b2-847b-6d4a275f1c3d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"25eaef2f-c235-44b2-847b-6d4a275f1c3d\") " pod="openstack/kube-state-metrics-0" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.644744 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25eaef2f-c235-44b2-847b-6d4a275f1c3d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"25eaef2f-c235-44b2-847b-6d4a275f1c3d\") " pod="openstack/kube-state-metrics-0" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.644922 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/25eaef2f-c235-44b2-847b-6d4a275f1c3d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"25eaef2f-c235-44b2-847b-6d4a275f1c3d\") " pod="openstack/kube-state-metrics-0" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.657504 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtzvn\" (UniqueName: \"kubernetes.io/projected/25eaef2f-c235-44b2-847b-6d4a275f1c3d-kube-api-access-xtzvn\") pod \"kube-state-metrics-0\" (UID: \"25eaef2f-c235-44b2-847b-6d4a275f1c3d\") " pod="openstack/kube-state-metrics-0" Feb 02 17:33:50 crc kubenswrapper[4858]: I0202 17:33:50.837616 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 17:33:51 crc kubenswrapper[4858]: I0202 17:33:51.419680 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 17:33:52 crc kubenswrapper[4858]: I0202 17:33:52.191563 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"25eaef2f-c235-44b2-847b-6d4a275f1c3d","Type":"ContainerStarted","Data":"e506cadfef9d7380fc9657d4df9d83da136e146743dea2026261bf49a64e9aac"} Feb 02 17:33:52 crc kubenswrapper[4858]: I0202 17:33:52.191934 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"25eaef2f-c235-44b2-847b-6d4a275f1c3d","Type":"ContainerStarted","Data":"bb04bf76606e4ed5bc3e2a7e6f252c7f94b42dbc529a29c3391b495379ff9ad6"} Feb 02 17:33:52 crc kubenswrapper[4858]: I0202 17:33:52.191959 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 02 17:33:52 crc kubenswrapper[4858]: I0202 17:33:52.221527 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.8560869 podStartE2EDuration="2.22149328s" podCreationTimestamp="2026-02-02 17:33:50 +0000 UTC" firstStartedPulling="2026-02-02 17:33:51.429444898 +0000 UTC m=+1132.581860153" lastFinishedPulling="2026-02-02 17:33:51.794851278 +0000 UTC m=+1132.947266533" observedRunningTime="2026-02-02 17:33:52.207128364 +0000 UTC m=+1133.359543629" watchObservedRunningTime="2026-02-02 17:33:52.22149328 +0000 UTC m=+1133.373908545" Feb 02 17:33:52 crc kubenswrapper[4858]: I0202 17:33:52.410298 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59a6029b-5965-40a3-9dbd-0b4784340ce0" path="/var/lib/kubelet/pods/59a6029b-5965-40a3-9dbd-0b4784340ce0/volumes" Feb 02 17:33:52 crc kubenswrapper[4858]: I0202 17:33:52.468501 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:33:52 crc kubenswrapper[4858]: I0202 17:33:52.468760 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab3475a4-dc06-489e-a4de-ac9a204c5248" containerName="ceilometer-central-agent" containerID="cri-o://d2b6935a1d58e12c2f15504d10d99db8f0c2e950bd30bf2914e1bc1b523a4d0a" gracePeriod=30 Feb 02 17:33:52 crc kubenswrapper[4858]: I0202 17:33:52.469158 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab3475a4-dc06-489e-a4de-ac9a204c5248" containerName="sg-core" containerID="cri-o://e6f2d7175b77045cd25d7910599c3ee2e43313b2f29b1d73338c066049051c4e" gracePeriod=30 Feb 02 17:33:52 crc kubenswrapper[4858]: I0202 17:33:52.469182 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab3475a4-dc06-489e-a4de-ac9a204c5248" containerName="proxy-httpd" containerID="cri-o://87a8835029baa720a68e63ab423f6ae00259824609f20fa6562fdcbcad043224" gracePeriod=30 Feb 02 17:33:52 crc kubenswrapper[4858]: I0202 17:33:52.469228 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab3475a4-dc06-489e-a4de-ac9a204c5248" containerName="ceilometer-notification-agent" containerID="cri-o://c842b3dd2c49df6a91cd4742c819516c4ffc305aa4314dead79290123b2b39ee" gracePeriod=30 Feb 02 17:33:53 crc kubenswrapper[4858]: I0202 17:33:53.206484 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="ab3475a4-dc06-489e-a4de-ac9a204c5248" containerID="87a8835029baa720a68e63ab423f6ae00259824609f20fa6562fdcbcad043224" exitCode=0 Feb 02 17:33:53 crc kubenswrapper[4858]: I0202 17:33:53.206537 4858 generic.go:334] "Generic (PLEG): container finished" podID="ab3475a4-dc06-489e-a4de-ac9a204c5248" containerID="e6f2d7175b77045cd25d7910599c3ee2e43313b2f29b1d73338c066049051c4e" exitCode=2 Feb 02 17:33:53 crc kubenswrapper[4858]: I0202 17:33:53.206548 4858 generic.go:334] "Generic (PLEG): container finished" podID="ab3475a4-dc06-489e-a4de-ac9a204c5248" containerID="d2b6935a1d58e12c2f15504d10d99db8f0c2e950bd30bf2914e1bc1b523a4d0a" exitCode=0 Feb 02 17:33:53 crc kubenswrapper[4858]: I0202 17:33:53.206581 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab3475a4-dc06-489e-a4de-ac9a204c5248","Type":"ContainerDied","Data":"87a8835029baa720a68e63ab423f6ae00259824609f20fa6562fdcbcad043224"} Feb 02 17:33:53 crc kubenswrapper[4858]: I0202 17:33:53.206656 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab3475a4-dc06-489e-a4de-ac9a204c5248","Type":"ContainerDied","Data":"e6f2d7175b77045cd25d7910599c3ee2e43313b2f29b1d73338c066049051c4e"} Feb 02 17:33:53 crc kubenswrapper[4858]: I0202 17:33:53.206685 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab3475a4-dc06-489e-a4de-ac9a204c5248","Type":"ContainerDied","Data":"d2b6935a1d58e12c2f15504d10d99db8f0c2e950bd30bf2914e1bc1b523a4d0a"} Feb 02 17:33:54 crc kubenswrapper[4858]: I0202 17:33:54.490543 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 17:33:54 crc kubenswrapper[4858]: I0202 17:33:54.499088 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 17:33:54 crc kubenswrapper[4858]: I0202 17:33:54.499319 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 17:33:55 crc kubenswrapper[4858]: I0202 17:33:55.226199 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.151917 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.234637 4858 generic.go:334] "Generic (PLEG): container finished" podID="ab3475a4-dc06-489e-a4de-ac9a204c5248" containerID="c842b3dd2c49df6a91cd4742c819516c4ffc305aa4314dead79290123b2b39ee" exitCode=0 Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.234700 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.234722 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab3475a4-dc06-489e-a4de-ac9a204c5248","Type":"ContainerDied","Data":"c842b3dd2c49df6a91cd4742c819516c4ffc305aa4314dead79290123b2b39ee"} Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.234779 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab3475a4-dc06-489e-a4de-ac9a204c5248","Type":"ContainerDied","Data":"a5c2cf06012630c4d7e7fd6390cb7a4f45205b88091dc7a6e46768d89fd6b9c0"} Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.234800 4858 scope.go:117] "RemoveContainer" containerID="87a8835029baa720a68e63ab423f6ae00259824609f20fa6562fdcbcad043224" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.250313 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-sg-core-conf-yaml\") pod \"ab3475a4-dc06-489e-a4de-ac9a204c5248\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.250369 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-combined-ca-bundle\") pod \"ab3475a4-dc06-489e-a4de-ac9a204c5248\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.250411 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-scripts\") pod \"ab3475a4-dc06-489e-a4de-ac9a204c5248\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.250468 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfhsv\" (UniqueName: \"kubernetes.io/projected/ab3475a4-dc06-489e-a4de-ac9a204c5248-kube-api-access-tfhsv\") pod \"ab3475a4-dc06-489e-a4de-ac9a204c5248\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.250638 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab3475a4-dc06-489e-a4de-ac9a204c5248-run-httpd\") pod \"ab3475a4-dc06-489e-a4de-ac9a204c5248\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.250705 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab3475a4-dc06-489e-a4de-ac9a204c5248-log-httpd\") pod \"ab3475a4-dc06-489e-a4de-ac9a204c5248\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.250734 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-config-data\") pod \"ab3475a4-dc06-489e-a4de-ac9a204c5248\" (UID: \"ab3475a4-dc06-489e-a4de-ac9a204c5248\") " Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.251246 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab3475a4-dc06-489e-a4de-ac9a204c5248-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"ab3475a4-dc06-489e-a4de-ac9a204c5248" (UID: "ab3475a4-dc06-489e-a4de-ac9a204c5248"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.251615 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab3475a4-dc06-489e-a4de-ac9a204c5248-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ab3475a4-dc06-489e-a4de-ac9a204c5248" (UID: "ab3475a4-dc06-489e-a4de-ac9a204c5248"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.254412 4858 scope.go:117] "RemoveContainer" containerID="e6f2d7175b77045cd25d7910599c3ee2e43313b2f29b1d73338c066049051c4e" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.258683 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-scripts" (OuterVolumeSpecName: "scripts") pod "ab3475a4-dc06-489e-a4de-ac9a204c5248" (UID: "ab3475a4-dc06-489e-a4de-ac9a204c5248"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.259085 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab3475a4-dc06-489e-a4de-ac9a204c5248-kube-api-access-tfhsv" (OuterVolumeSpecName: "kube-api-access-tfhsv") pod "ab3475a4-dc06-489e-a4de-ac9a204c5248" (UID: "ab3475a4-dc06-489e-a4de-ac9a204c5248"). InnerVolumeSpecName "kube-api-access-tfhsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.280146 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ab3475a4-dc06-489e-a4de-ac9a204c5248" (UID: "ab3475a4-dc06-489e-a4de-ac9a204c5248"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.321816 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab3475a4-dc06-489e-a4de-ac9a204c5248" (UID: "ab3475a4-dc06-489e-a4de-ac9a204c5248"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.342376 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-config-data" (OuterVolumeSpecName: "config-data") pod "ab3475a4-dc06-489e-a4de-ac9a204c5248" (UID: "ab3475a4-dc06-489e-a4de-ac9a204c5248"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.352920 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab3475a4-dc06-489e-a4de-ac9a204c5248-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.352953 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab3475a4-dc06-489e-a4de-ac9a204c5248-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.352962 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.352997 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.353013 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.353021 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab3475a4-dc06-489e-a4de-ac9a204c5248-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.353029 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfhsv\" (UniqueName: \"kubernetes.io/projected/ab3475a4-dc06-489e-a4de-ac9a204c5248-kube-api-access-tfhsv\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.364324 4858 scope.go:117] "RemoveContainer" containerID="c842b3dd2c49df6a91cd4742c819516c4ffc305aa4314dead79290123b2b39ee" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.383642 4858 scope.go:117] "RemoveContainer" containerID="d2b6935a1d58e12c2f15504d10d99db8f0c2e950bd30bf2914e1bc1b523a4d0a" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.400951 4858 scope.go:117] "RemoveContainer" containerID="87a8835029baa720a68e63ab423f6ae00259824609f20fa6562fdcbcad043224" Feb 02 17:33:56 crc kubenswrapper[4858]: E0202 17:33:56.401402 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87a8835029baa720a68e63ab423f6ae00259824609f20fa6562fdcbcad043224\": container with ID starting with 87a8835029baa720a68e63ab423f6ae00259824609f20fa6562fdcbcad043224 not found: ID does not exist" containerID="87a8835029baa720a68e63ab423f6ae00259824609f20fa6562fdcbcad043224" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.401437 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87a8835029baa720a68e63ab423f6ae00259824609f20fa6562fdcbcad043224"} err="failed to get container status \"87a8835029baa720a68e63ab423f6ae00259824609f20fa6562fdcbcad043224\": rpc error: code = NotFound desc = could not find container \"87a8835029baa720a68e63ab423f6ae00259824609f20fa6562fdcbcad043224\": container with ID starting with 87a8835029baa720a68e63ab423f6ae00259824609f20fa6562fdcbcad043224 not found: ID does not exist" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 
17:33:56.401457 4858 scope.go:117] "RemoveContainer" containerID="e6f2d7175b77045cd25d7910599c3ee2e43313b2f29b1d73338c066049051c4e" Feb 02 17:33:56 crc kubenswrapper[4858]: E0202 17:33:56.401794 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6f2d7175b77045cd25d7910599c3ee2e43313b2f29b1d73338c066049051c4e\": container with ID starting with e6f2d7175b77045cd25d7910599c3ee2e43313b2f29b1d73338c066049051c4e not found: ID does not exist" containerID="e6f2d7175b77045cd25d7910599c3ee2e43313b2f29b1d73338c066049051c4e" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.401870 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f2d7175b77045cd25d7910599c3ee2e43313b2f29b1d73338c066049051c4e"} err="failed to get container status \"e6f2d7175b77045cd25d7910599c3ee2e43313b2f29b1d73338c066049051c4e\": rpc error: code = NotFound desc = could not find container \"e6f2d7175b77045cd25d7910599c3ee2e43313b2f29b1d73338c066049051c4e\": container with ID starting with e6f2d7175b77045cd25d7910599c3ee2e43313b2f29b1d73338c066049051c4e not found: ID does not exist" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.401941 4858 scope.go:117] "RemoveContainer" containerID="c842b3dd2c49df6a91cd4742c819516c4ffc305aa4314dead79290123b2b39ee" Feb 02 17:33:56 crc kubenswrapper[4858]: E0202 17:33:56.402219 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c842b3dd2c49df6a91cd4742c819516c4ffc305aa4314dead79290123b2b39ee\": container with ID starting with c842b3dd2c49df6a91cd4742c819516c4ffc305aa4314dead79290123b2b39ee not found: ID does not exist" containerID="c842b3dd2c49df6a91cd4742c819516c4ffc305aa4314dead79290123b2b39ee" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.402243 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c842b3dd2c49df6a91cd4742c819516c4ffc305aa4314dead79290123b2b39ee"} err="failed to get container status \"c842b3dd2c49df6a91cd4742c819516c4ffc305aa4314dead79290123b2b39ee\": rpc error: code = NotFound desc = could not find container \"c842b3dd2c49df6a91cd4742c819516c4ffc305aa4314dead79290123b2b39ee\": container with ID starting with c842b3dd2c49df6a91cd4742c819516c4ffc305aa4314dead79290123b2b39ee not found: ID does not exist" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.402259 4858 scope.go:117] "RemoveContainer" containerID="d2b6935a1d58e12c2f15504d10d99db8f0c2e950bd30bf2914e1bc1b523a4d0a" Feb 02 17:33:56 crc kubenswrapper[4858]: E0202 17:33:56.402509 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2b6935a1d58e12c2f15504d10d99db8f0c2e950bd30bf2914e1bc1b523a4d0a\": container with ID starting with d2b6935a1d58e12c2f15504d10d99db8f0c2e950bd30bf2914e1bc1b523a4d0a not found: ID does not exist" containerID="d2b6935a1d58e12c2f15504d10d99db8f0c2e950bd30bf2914e1bc1b523a4d0a" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.402601 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2b6935a1d58e12c2f15504d10d99db8f0c2e950bd30bf2914e1bc1b523a4d0a"} err="failed to get container status \"d2b6935a1d58e12c2f15504d10d99db8f0c2e950bd30bf2914e1bc1b523a4d0a\": rpc error: code = NotFound desc = could not find container \"d2b6935a1d58e12c2f15504d10d99db8f0c2e950bd30bf2914e1bc1b523a4d0a\": container with ID 
starting with d2b6935a1d58e12c2f15504d10d99db8f0c2e950bd30bf2914e1bc1b523a4d0a not found: ID does not exist" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.560648 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.571301 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.583737 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:33:56 crc kubenswrapper[4858]: E0202 17:33:56.584136 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab3475a4-dc06-489e-a4de-ac9a204c5248" containerName="sg-core" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.584152 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab3475a4-dc06-489e-a4de-ac9a204c5248" containerName="sg-core" Feb 02 17:33:56 crc kubenswrapper[4858]: E0202 17:33:56.584173 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab3475a4-dc06-489e-a4de-ac9a204c5248" containerName="ceilometer-central-agent" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.584180 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab3475a4-dc06-489e-a4de-ac9a204c5248" containerName="ceilometer-central-agent" Feb 02 17:33:56 crc kubenswrapper[4858]: E0202 17:33:56.584193 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab3475a4-dc06-489e-a4de-ac9a204c5248" containerName="proxy-httpd" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.584200 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab3475a4-dc06-489e-a4de-ac9a204c5248" containerName="proxy-httpd" Feb 02 17:33:56 crc kubenswrapper[4858]: E0202 17:33:56.584213 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab3475a4-dc06-489e-a4de-ac9a204c5248" containerName="ceilometer-notification-agent" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.584219 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab3475a4-dc06-489e-a4de-ac9a204c5248" containerName="ceilometer-notification-agent" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.584373 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab3475a4-dc06-489e-a4de-ac9a204c5248" containerName="sg-core" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.584385 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab3475a4-dc06-489e-a4de-ac9a204c5248" containerName="ceilometer-central-agent" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.584399 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab3475a4-dc06-489e-a4de-ac9a204c5248" containerName="ceilometer-notification-agent" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.584411 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab3475a4-dc06-489e-a4de-ac9a204c5248" containerName="proxy-httpd" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.586054 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.587912 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.590594 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.591041 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.598349 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.759655 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42d83da4-faa1-4e84-b536-77ee466428fb-run-httpd\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.759958 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-config-data\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.760082 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42d83da4-faa1-4e84-b536-77ee466428fb-log-httpd\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.760208 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.760288 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-scripts\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.760355 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.760444 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.760530 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gckv\" (UniqueName: 
\"kubernetes.io/projected/42d83da4-faa1-4e84-b536-77ee466428fb-kube-api-access-5gckv\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.862054 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42d83da4-faa1-4e84-b536-77ee466428fb-run-httpd\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.862111 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-config-data\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.862148 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42d83da4-faa1-4e84-b536-77ee466428fb-log-httpd\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.862188 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.862205 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-scripts\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.862223 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.862255 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.862277 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gckv\" (UniqueName: \"kubernetes.io/projected/42d83da4-faa1-4e84-b536-77ee466428fb-kube-api-access-5gckv\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.862959 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42d83da4-faa1-4e84-b536-77ee466428fb-run-httpd\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.863501 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/42d83da4-faa1-4e84-b536-77ee466428fb-log-httpd\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.867555 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.868048 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.868253 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.870654 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-config-data\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.883223 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-scripts\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.890716 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gckv\" (UniqueName: \"kubernetes.io/projected/42d83da4-faa1-4e84-b536-77ee466428fb-kube-api-access-5gckv\") pod \"ceilometer-0\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " pod="openstack/ceilometer-0" Feb 02 17:33:56 crc kubenswrapper[4858]: I0202 17:33:56.901455 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.076932 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.167643 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5kx6\" (UniqueName: \"kubernetes.io/projected/aa9cda80-8059-476f-a7f3-710bb907548f-kube-api-access-h5kx6\") pod \"aa9cda80-8059-476f-a7f3-710bb907548f\" (UID: \"aa9cda80-8059-476f-a7f3-710bb907548f\") " Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.167719 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9cda80-8059-476f-a7f3-710bb907548f-config-data\") pod \"aa9cda80-8059-476f-a7f3-710bb907548f\" (UID: \"aa9cda80-8059-476f-a7f3-710bb907548f\") " Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.167799 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9cda80-8059-476f-a7f3-710bb907548f-combined-ca-bundle\") pod \"aa9cda80-8059-476f-a7f3-710bb907548f\" (UID: \"aa9cda80-8059-476f-a7f3-710bb907548f\") " Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.174485 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa9cda80-8059-476f-a7f3-710bb907548f-kube-api-access-h5kx6" (OuterVolumeSpecName: "kube-api-access-h5kx6") pod "aa9cda80-8059-476f-a7f3-710bb907548f" (UID: "aa9cda80-8059-476f-a7f3-710bb907548f"). InnerVolumeSpecName "kube-api-access-h5kx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.218597 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa9cda80-8059-476f-a7f3-710bb907548f-config-data" (OuterVolumeSpecName: "config-data") pod "aa9cda80-8059-476f-a7f3-710bb907548f" (UID: "aa9cda80-8059-476f-a7f3-710bb907548f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.221860 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa9cda80-8059-476f-a7f3-710bb907548f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa9cda80-8059-476f-a7f3-710bb907548f" (UID: "aa9cda80-8059-476f-a7f3-710bb907548f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.244880 4858 generic.go:334] "Generic (PLEG): container finished" podID="aa9cda80-8059-476f-a7f3-710bb907548f" containerID="7bca33acfbeab541042e8af6c4b6902a26f4672cf55fc38cedae257de965cc89" exitCode=137 Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.244958 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.245012 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"aa9cda80-8059-476f-a7f3-710bb907548f","Type":"ContainerDied","Data":"7bca33acfbeab541042e8af6c4b6902a26f4672cf55fc38cedae257de965cc89"} Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.245043 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"aa9cda80-8059-476f-a7f3-710bb907548f","Type":"ContainerDied","Data":"7a8ce7e3c97b7e3e2771fed04eb90ff6311e173fc2c8a879e3597d81d2b7b5d7"} Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.245063 4858 scope.go:117] "RemoveContainer" containerID="7bca33acfbeab541042e8af6c4b6902a26f4672cf55fc38cedae257de965cc89" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.271429 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa9cda80-8059-476f-a7f3-710bb907548f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.271465 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5kx6\" (UniqueName: \"kubernetes.io/projected/aa9cda80-8059-476f-a7f3-710bb907548f-kube-api-access-h5kx6\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.271477 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa9cda80-8059-476f-a7f3-710bb907548f-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.287400 4858 scope.go:117] "RemoveContainer" containerID="7bca33acfbeab541042e8af6c4b6902a26f4672cf55fc38cedae257de965cc89" Feb 02 17:33:57 crc kubenswrapper[4858]: E0202 17:33:57.293660 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bca33acfbeab541042e8af6c4b6902a26f4672cf55fc38cedae257de965cc89\": container with ID starting with 7bca33acfbeab541042e8af6c4b6902a26f4672cf55fc38cedae257de965cc89 not found: ID does not exist" containerID="7bca33acfbeab541042e8af6c4b6902a26f4672cf55fc38cedae257de965cc89" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.293716 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bca33acfbeab541042e8af6c4b6902a26f4672cf55fc38cedae257de965cc89"} err="failed to get container status \"7bca33acfbeab541042e8af6c4b6902a26f4672cf55fc38cedae257de965cc89\": rpc error: code = NotFound desc = could not find container \"7bca33acfbeab541042e8af6c4b6902a26f4672cf55fc38cedae257de965cc89\": container with ID starting with 7bca33acfbeab541042e8af6c4b6902a26f4672cf55fc38cedae257de965cc89 not found: ID does not exist" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.295736 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.315294 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.324830 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 17:33:57 crc kubenswrapper[4858]: E0202 17:33:57.325450 4858 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="aa9cda80-8059-476f-a7f3-710bb907548f" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.325471 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9cda80-8059-476f-a7f3-710bb907548f" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.325679 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa9cda80-8059-476f-a7f3-710bb907548f" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.326368 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.328277 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.328608 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.328779 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.333323 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.378105 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:33:57 crc kubenswrapper[4858]: W0202 17:33:57.382616 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42d83da4_faa1_4e84_b536_77ee466428fb.slice/crio-aee21e7fca17f75931811074f5257607b3c98f530d3a174931e85ddb1f27b9c3 WatchSource:0}: Error finding container aee21e7fca17f75931811074f5257607b3c98f530d3a174931e85ddb1f27b9c3: Status 404 returned error can't find the container with id aee21e7fca17f75931811074f5257607b3c98f530d3a174931e85ddb1f27b9c3 Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.474273 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/319e3f38-af96-4ac6-9791-094f9a7d67ab-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"319e3f38-af96-4ac6-9791-094f9a7d67ab\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.474547 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdnw6\" (UniqueName: \"kubernetes.io/projected/319e3f38-af96-4ac6-9791-094f9a7d67ab-kube-api-access-sdnw6\") pod \"nova-cell1-novncproxy-0\" (UID: \"319e3f38-af96-4ac6-9791-094f9a7d67ab\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.474673 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/319e3f38-af96-4ac6-9791-094f9a7d67ab-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"319e3f38-af96-4ac6-9791-094f9a7d67ab\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.474836 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/319e3f38-af96-4ac6-9791-094f9a7d67ab-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"319e3f38-af96-4ac6-9791-094f9a7d67ab\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.474904 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/319e3f38-af96-4ac6-9791-094f9a7d67ab-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"319e3f38-af96-4ac6-9791-094f9a7d67ab\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.576483 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/319e3f38-af96-4ac6-9791-094f9a7d67ab-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"319e3f38-af96-4ac6-9791-094f9a7d67ab\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.576535 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdnw6\" (UniqueName: \"kubernetes.io/projected/319e3f38-af96-4ac6-9791-094f9a7d67ab-kube-api-access-sdnw6\") pod \"nova-cell1-novncproxy-0\" (UID: \"319e3f38-af96-4ac6-9791-094f9a7d67ab\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.576603 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/319e3f38-af96-4ac6-9791-094f9a7d67ab-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"319e3f38-af96-4ac6-9791-094f9a7d67ab\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.576683 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319e3f38-af96-4ac6-9791-094f9a7d67ab-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"319e3f38-af96-4ac6-9791-094f9a7d67ab\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.576765 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/319e3f38-af96-4ac6-9791-094f9a7d67ab-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"319e3f38-af96-4ac6-9791-094f9a7d67ab\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.580284 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/319e3f38-af96-4ac6-9791-094f9a7d67ab-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"319e3f38-af96-4ac6-9791-094f9a7d67ab\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.580289 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319e3f38-af96-4ac6-9791-094f9a7d67ab-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"319e3f38-af96-4ac6-9791-094f9a7d67ab\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.580623 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/319e3f38-af96-4ac6-9791-094f9a7d67ab-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"319e3f38-af96-4ac6-9791-094f9a7d67ab\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.581022 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/319e3f38-af96-4ac6-9791-094f9a7d67ab-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"319e3f38-af96-4ac6-9791-094f9a7d67ab\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.591938 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdnw6\" (UniqueName: \"kubernetes.io/projected/319e3f38-af96-4ac6-9791-094f9a7d67ab-kube-api-access-sdnw6\") pod \"nova-cell1-novncproxy-0\" (UID: \"319e3f38-af96-4ac6-9791-094f9a7d67ab\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:57 crc kubenswrapper[4858]: I0202 17:33:57.656557 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:33:58 crc kubenswrapper[4858]: I0202 17:33:58.221926 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 17:33:58 crc kubenswrapper[4858]: I0202 17:33:58.264805 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42d83da4-faa1-4e84-b536-77ee466428fb","Type":"ContainerStarted","Data":"aee21e7fca17f75931811074f5257607b3c98f530d3a174931e85ddb1f27b9c3"} Feb 02 17:33:58 crc kubenswrapper[4858]: I0202 17:33:58.268464 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"319e3f38-af96-4ac6-9791-094f9a7d67ab","Type":"ContainerStarted","Data":"6d87c4bfdd3ce28db57bc99b83c8f0e03625d5158edffca29faf8f0c45cd148a"} Feb 02 17:33:58 crc kubenswrapper[4858]: I0202 17:33:58.411135 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa9cda80-8059-476f-a7f3-710bb907548f" path="/var/lib/kubelet/pods/aa9cda80-8059-476f-a7f3-710bb907548f/volumes" Feb 02 17:33:58 crc kubenswrapper[4858]: I0202 17:33:58.411688 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab3475a4-dc06-489e-a4de-ac9a204c5248" path="/var/lib/kubelet/pods/ab3475a4-dc06-489e-a4de-ac9a204c5248/volumes" Feb 02 17:33:58 crc kubenswrapper[4858]: I0202 17:33:58.494079 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 17:33:58 crc kubenswrapper[4858]: I0202 17:33:58.495082 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 17:33:58 crc kubenswrapper[4858]: I0202 17:33:58.495893 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 17:33:58 crc kubenswrapper[4858]: I0202 17:33:58.500926 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.297540 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"319e3f38-af96-4ac6-9791-094f9a7d67ab","Type":"ContainerStarted","Data":"3c809b25ca934364db63e1ac02ee5022fe37f0d89b232c6a893417e4a4e62aa9"} Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.309912 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"42d83da4-faa1-4e84-b536-77ee466428fb","Type":"ContainerStarted","Data":"2396fc8513c5a7ec1bc1bb7157e4dd8c317ec5571a89dbc39c42baa89d9c5558"} Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.309991 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.310005 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42d83da4-faa1-4e84-b536-77ee466428fb","Type":"ContainerStarted","Data":"41a97d2ca6ee350d93d0d0a8ef535d2ae4b28bb431ddefd3c060324e6248cc40"} Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.324285 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.326847 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.326824638 podStartE2EDuration="2.326824638s" podCreationTimestamp="2026-02-02 17:33:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:33:59.31698553 +0000 UTC m=+1140.469400795" watchObservedRunningTime="2026-02-02 17:33:59.326824638 +0000 UTC m=+1140.479239903" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.542789 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-8hchg"] Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.546079 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.559566 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-8hchg"] Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.732457 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-8hchg\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.732509 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jv82\" (UniqueName: \"kubernetes.io/projected/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-kube-api-access-5jv82\") pod \"dnsmasq-dns-89c5cd4d5-8hchg\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.732538 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-8hchg\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.732556 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-config\") pod \"dnsmasq-dns-89c5cd4d5-8hchg\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.732597 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-8hchg\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.732673 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-8hchg\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.834774 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-8hchg\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.835437 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-8hchg\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.835627 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-8hchg\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.835727 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jv82\" (UniqueName: \"kubernetes.io/projected/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-kube-api-access-5jv82\") pod \"dnsmasq-dns-89c5cd4d5-8hchg\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.835836 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-8hchg\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.835945 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-config\") pod \"dnsmasq-dns-89c5cd4d5-8hchg\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.836220 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-8hchg\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.837023 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-config\") pod \"dnsmasq-dns-89c5cd4d5-8hchg\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.835724 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-8hchg\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.837171 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-8hchg\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.837858 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-8hchg\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.857002 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jv82\" (UniqueName: \"kubernetes.io/projected/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-kube-api-access-5jv82\") pod \"dnsmasq-dns-89c5cd4d5-8hchg\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:33:59 crc kubenswrapper[4858]: I0202 17:33:59.967103 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:34:00 crc kubenswrapper[4858]: I0202 17:34:00.318242 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42d83da4-faa1-4e84-b536-77ee466428fb","Type":"ContainerStarted","Data":"67c26dcc05557ee45d4911f6e51d2e1555c18cd2dd03d2e3113591b6b0bbade0"} Feb 02 17:34:00 crc kubenswrapper[4858]: I0202 17:34:00.450422 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-8hchg"] Feb 02 17:34:00 crc kubenswrapper[4858]: I0202 17:34:00.861030 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 02 17:34:01 crc kubenswrapper[4858]: I0202 17:34:01.328147 4858 generic.go:334] "Generic (PLEG): container finished" podID="3bdf9361-19a2-4c6d-a909-6c50d53e5d76" containerID="d73141e4e73332b85d5071e4381c5ee333f917aa8e1d1560143a8a8ac886cf62" exitCode=0 Feb 02 17:34:01 crc kubenswrapper[4858]: I0202 17:34:01.328214 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" event={"ID":"3bdf9361-19a2-4c6d-a909-6c50d53e5d76","Type":"ContainerDied","Data":"d73141e4e73332b85d5071e4381c5ee333f917aa8e1d1560143a8a8ac886cf62"} Feb 02 17:34:01 crc kubenswrapper[4858]: I0202 17:34:01.328267 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" event={"ID":"3bdf9361-19a2-4c6d-a909-6c50d53e5d76","Type":"ContainerStarted","Data":"7cac1979ccec50fbc838e07955e7265523313066458cdede9929b55c3042dc20"} Feb 02 17:34:02 crc kubenswrapper[4858]: I0202 17:34:02.338532 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42d83da4-faa1-4e84-b536-77ee466428fb","Type":"ContainerStarted","Data":"c5be747afe60e49e7be18ed8ad18a0de8c0e622eb98de15230350901e25740b9"} Feb 02 17:34:02 crc kubenswrapper[4858]: I0202 17:34:02.339956 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 17:34:02 crc kubenswrapper[4858]: I0202 17:34:02.341564 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" event={"ID":"3bdf9361-19a2-4c6d-a909-6c50d53e5d76","Type":"ContainerStarted","Data":"749dc2e33b3990e97f669fcd14a8bec84f89aa688a0fc50736070f6756f2197f"} Feb 02 17:34:02 crc kubenswrapper[4858]: I0202 17:34:02.342478 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:34:02 crc kubenswrapper[4858]: I0202 17:34:02.411569 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" podStartSLOduration=3.411548556 podStartE2EDuration="3.411548556s" podCreationTimestamp="2026-02-02 17:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:34:02.400895714 +0000 UTC m=+1143.553310989" watchObservedRunningTime="2026-02-02 17:34:02.411548556 +0000 UTC m=+1143.563963821" Feb 02 17:34:02 crc kubenswrapper[4858]: I0202 17:34:02.412618 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.855010251 podStartE2EDuration="6.412611006s" podCreationTimestamp="2026-02-02 17:33:56 +0000 UTC" firstStartedPulling="2026-02-02 17:33:57.38425871 +0000 UTC m=+1138.536673975" lastFinishedPulling="2026-02-02 17:34:01.941859465 +0000 UTC m=+1143.094274730" 
observedRunningTime="2026-02-02 17:34:02.385686254 +0000 UTC m=+1143.538101539" watchObservedRunningTime="2026-02-02 17:34:02.412611006 +0000 UTC m=+1143.565026271" Feb 02 17:34:02 crc kubenswrapper[4858]: I0202 17:34:02.503862 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 17:34:02 crc kubenswrapper[4858]: I0202 17:34:02.504104 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="79296b02-fa25-434c-ab5a-6ebb9a116de2" containerName="nova-api-log" containerID="cri-o://ca2030dde18fb80d4d7314dc521f46b6ee88aa6f5114216bd3c52f1b73b4c293" gracePeriod=30 Feb 02 17:34:02 crc kubenswrapper[4858]: I0202 17:34:02.504397 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="79296b02-fa25-434c-ab5a-6ebb9a116de2" containerName="nova-api-api" containerID="cri-o://b11884c5550ac9f923693ddd95c3ee6d4e6a6f50240f72c72f01ceddde9ef4cb" gracePeriod=30 Feb 02 17:34:02 crc kubenswrapper[4858]: I0202 17:34:02.657451 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:34:02 crc kubenswrapper[4858]: I0202 17:34:02.826800 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:34:03 crc kubenswrapper[4858]: I0202 17:34:03.368436 4858 generic.go:334] "Generic (PLEG): container finished" podID="79296b02-fa25-434c-ab5a-6ebb9a116de2" containerID="ca2030dde18fb80d4d7314dc521f46b6ee88aa6f5114216bd3c52f1b73b4c293" exitCode=143 Feb 02 17:34:03 crc kubenswrapper[4858]: I0202 17:34:03.368736 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79296b02-fa25-434c-ab5a-6ebb9a116de2","Type":"ContainerDied","Data":"ca2030dde18fb80d4d7314dc521f46b6ee88aa6f5114216bd3c52f1b73b4c293"} Feb 02 17:34:04 crc kubenswrapper[4858]: I0202 17:34:04.386447 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42d83da4-faa1-4e84-b536-77ee466428fb" containerName="ceilometer-central-agent" containerID="cri-o://41a97d2ca6ee350d93d0d0a8ef535d2ae4b28bb431ddefd3c060324e6248cc40" gracePeriod=30 Feb 02 17:34:04 crc kubenswrapper[4858]: I0202 17:34:04.387462 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42d83da4-faa1-4e84-b536-77ee466428fb" containerName="sg-core" containerID="cri-o://67c26dcc05557ee45d4911f6e51d2e1555c18cd2dd03d2e3113591b6b0bbade0" gracePeriod=30 Feb 02 17:34:04 crc kubenswrapper[4858]: I0202 17:34:04.387589 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42d83da4-faa1-4e84-b536-77ee466428fb" containerName="proxy-httpd" containerID="cri-o://c5be747afe60e49e7be18ed8ad18a0de8c0e622eb98de15230350901e25740b9" gracePeriod=30 Feb 02 17:34:04 crc kubenswrapper[4858]: I0202 17:34:04.387639 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42d83da4-faa1-4e84-b536-77ee466428fb" containerName="ceilometer-notification-agent" containerID="cri-o://2396fc8513c5a7ec1bc1bb7157e4dd8c317ec5571a89dbc39c42baa89d9c5558" gracePeriod=30 Feb 02 17:34:05 crc kubenswrapper[4858]: I0202 17:34:05.399190 4858 generic.go:334] "Generic (PLEG): container finished" podID="42d83da4-faa1-4e84-b536-77ee466428fb" containerID="c5be747afe60e49e7be18ed8ad18a0de8c0e622eb98de15230350901e25740b9" exitCode=0 Feb 02 17:34:05 crc 
kubenswrapper[4858]: I0202 17:34:05.399562 4858 generic.go:334] "Generic (PLEG): container finished" podID="42d83da4-faa1-4e84-b536-77ee466428fb" containerID="67c26dcc05557ee45d4911f6e51d2e1555c18cd2dd03d2e3113591b6b0bbade0" exitCode=2 Feb 02 17:34:05 crc kubenswrapper[4858]: I0202 17:34:05.399575 4858 generic.go:334] "Generic (PLEG): container finished" podID="42d83da4-faa1-4e84-b536-77ee466428fb" containerID="2396fc8513c5a7ec1bc1bb7157e4dd8c317ec5571a89dbc39c42baa89d9c5558" exitCode=0 Feb 02 17:34:05 crc kubenswrapper[4858]: I0202 17:34:05.399259 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42d83da4-faa1-4e84-b536-77ee466428fb","Type":"ContainerDied","Data":"c5be747afe60e49e7be18ed8ad18a0de8c0e622eb98de15230350901e25740b9"} Feb 02 17:34:05 crc kubenswrapper[4858]: I0202 17:34:05.399614 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42d83da4-faa1-4e84-b536-77ee466428fb","Type":"ContainerDied","Data":"67c26dcc05557ee45d4911f6e51d2e1555c18cd2dd03d2e3113591b6b0bbade0"} Feb 02 17:34:05 crc kubenswrapper[4858]: I0202 17:34:05.399628 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42d83da4-faa1-4e84-b536-77ee466428fb","Type":"ContainerDied","Data":"2396fc8513c5a7ec1bc1bb7157e4dd8c317ec5571a89dbc39c42baa89d9c5558"} Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.200449 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.371704 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79296b02-fa25-434c-ab5a-6ebb9a116de2-logs\") pod \"79296b02-fa25-434c-ab5a-6ebb9a116de2\" (UID: \"79296b02-fa25-434c-ab5a-6ebb9a116de2\") " Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.371846 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm7rn\" (UniqueName: \"kubernetes.io/projected/79296b02-fa25-434c-ab5a-6ebb9a116de2-kube-api-access-wm7rn\") pod \"79296b02-fa25-434c-ab5a-6ebb9a116de2\" (UID: \"79296b02-fa25-434c-ab5a-6ebb9a116de2\") " Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.371904 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79296b02-fa25-434c-ab5a-6ebb9a116de2-combined-ca-bundle\") pod \"79296b02-fa25-434c-ab5a-6ebb9a116de2\" (UID: \"79296b02-fa25-434c-ab5a-6ebb9a116de2\") " Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.371988 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79296b02-fa25-434c-ab5a-6ebb9a116de2-config-data\") pod \"79296b02-fa25-434c-ab5a-6ebb9a116de2\" (UID: \"79296b02-fa25-434c-ab5a-6ebb9a116de2\") " Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.374788 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79296b02-fa25-434c-ab5a-6ebb9a116de2-logs" (OuterVolumeSpecName: "logs") pod "79296b02-fa25-434c-ab5a-6ebb9a116de2" (UID: "79296b02-fa25-434c-ab5a-6ebb9a116de2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.400726 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79296b02-fa25-434c-ab5a-6ebb9a116de2-kube-api-access-wm7rn" (OuterVolumeSpecName: "kube-api-access-wm7rn") pod "79296b02-fa25-434c-ab5a-6ebb9a116de2" (UID: "79296b02-fa25-434c-ab5a-6ebb9a116de2"). InnerVolumeSpecName "kube-api-access-wm7rn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.410681 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79296b02-fa25-434c-ab5a-6ebb9a116de2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79296b02-fa25-434c-ab5a-6ebb9a116de2" (UID: "79296b02-fa25-434c-ab5a-6ebb9a116de2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.416563 4858 generic.go:334] "Generic (PLEG): container finished" podID="79296b02-fa25-434c-ab5a-6ebb9a116de2" containerID="b11884c5550ac9f923693ddd95c3ee6d4e6a6f50240f72c72f01ceddde9ef4cb" exitCode=0 Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.416679 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.423050 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79296b02-fa25-434c-ab5a-6ebb9a116de2-config-data" (OuterVolumeSpecName: "config-data") pod "79296b02-fa25-434c-ab5a-6ebb9a116de2" (UID: "79296b02-fa25-434c-ab5a-6ebb9a116de2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.423270 4858 generic.go:334] "Generic (PLEG): container finished" podID="42d83da4-faa1-4e84-b536-77ee466428fb" containerID="41a97d2ca6ee350d93d0d0a8ef535d2ae4b28bb431ddefd3c060324e6248cc40" exitCode=0 Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.476365 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79296b02-fa25-434c-ab5a-6ebb9a116de2-logs\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.476399 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm7rn\" (UniqueName: \"kubernetes.io/projected/79296b02-fa25-434c-ab5a-6ebb9a116de2-kube-api-access-wm7rn\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.476409 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79296b02-fa25-434c-ab5a-6ebb9a116de2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.476419 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79296b02-fa25-434c-ab5a-6ebb9a116de2-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.497811 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79296b02-fa25-434c-ab5a-6ebb9a116de2","Type":"ContainerDied","Data":"b11884c5550ac9f923693ddd95c3ee6d4e6a6f50240f72c72f01ceddde9ef4cb"} Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.497852 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"79296b02-fa25-434c-ab5a-6ebb9a116de2","Type":"ContainerDied","Data":"e5d5e303223044cd6a857f8f95ffd0c5f1d3729ed6f9a62a39b9884531b4bb5b"} Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.497867 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42d83da4-faa1-4e84-b536-77ee466428fb","Type":"ContainerDied","Data":"41a97d2ca6ee350d93d0d0a8ef535d2ae4b28bb431ddefd3c060324e6248cc40"} Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.497885 4858 scope.go:117] "RemoveContainer" containerID="b11884c5550ac9f923693ddd95c3ee6d4e6a6f50240f72c72f01ceddde9ef4cb" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.510253 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.529528 4858 scope.go:117] "RemoveContainer" containerID="ca2030dde18fb80d4d7314dc521f46b6ee88aa6f5114216bd3c52f1b73b4c293" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.572342 4858 scope.go:117] "RemoveContainer" containerID="b11884c5550ac9f923693ddd95c3ee6d4e6a6f50240f72c72f01ceddde9ef4cb" Feb 02 17:34:06 crc kubenswrapper[4858]: E0202 17:34:06.572837 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b11884c5550ac9f923693ddd95c3ee6d4e6a6f50240f72c72f01ceddde9ef4cb\": container with ID starting with b11884c5550ac9f923693ddd95c3ee6d4e6a6f50240f72c72f01ceddde9ef4cb not found: ID does not exist" containerID="b11884c5550ac9f923693ddd95c3ee6d4e6a6f50240f72c72f01ceddde9ef4cb" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.572913 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b11884c5550ac9f923693ddd95c3ee6d4e6a6f50240f72c72f01ceddde9ef4cb"} err="failed to get container status \"b11884c5550ac9f923693ddd95c3ee6d4e6a6f50240f72c72f01ceddde9ef4cb\": rpc error: code = NotFound desc = could not find container \"b11884c5550ac9f923693ddd95c3ee6d4e6a6f50240f72c72f01ceddde9ef4cb\": container with ID starting with b11884c5550ac9f923693ddd95c3ee6d4e6a6f50240f72c72f01ceddde9ef4cb not found: ID does not exist" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.572947 4858 scope.go:117] "RemoveContainer" containerID="ca2030dde18fb80d4d7314dc521f46b6ee88aa6f5114216bd3c52f1b73b4c293" Feb 02 17:34:06 crc kubenswrapper[4858]: E0202 17:34:06.573739 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca2030dde18fb80d4d7314dc521f46b6ee88aa6f5114216bd3c52f1b73b4c293\": container with ID starting with ca2030dde18fb80d4d7314dc521f46b6ee88aa6f5114216bd3c52f1b73b4c293 not found: ID does not exist" containerID="ca2030dde18fb80d4d7314dc521f46b6ee88aa6f5114216bd3c52f1b73b4c293" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.573772 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca2030dde18fb80d4d7314dc521f46b6ee88aa6f5114216bd3c52f1b73b4c293"} err="failed to get container status \"ca2030dde18fb80d4d7314dc521f46b6ee88aa6f5114216bd3c52f1b73b4c293\": rpc error: code = NotFound desc = could not find container \"ca2030dde18fb80d4d7314dc521f46b6ee88aa6f5114216bd3c52f1b73b4c293\": container with ID starting with ca2030dde18fb80d4d7314dc521f46b6ee88aa6f5114216bd3c52f1b73b4c293 not found: ID does not exist" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.679159 4858 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-combined-ca-bundle\") pod \"42d83da4-faa1-4e84-b536-77ee466428fb\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.679231 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42d83da4-faa1-4e84-b536-77ee466428fb-run-httpd\") pod \"42d83da4-faa1-4e84-b536-77ee466428fb\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.679260 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-sg-core-conf-yaml\") pod \"42d83da4-faa1-4e84-b536-77ee466428fb\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.679297 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-ceilometer-tls-certs\") pod \"42d83da4-faa1-4e84-b536-77ee466428fb\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.679344 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-config-data\") pod \"42d83da4-faa1-4e84-b536-77ee466428fb\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.679359 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-scripts\") pod \"42d83da4-faa1-4e84-b536-77ee466428fb\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.679423 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gckv\" (UniqueName: \"kubernetes.io/projected/42d83da4-faa1-4e84-b536-77ee466428fb-kube-api-access-5gckv\") pod \"42d83da4-faa1-4e84-b536-77ee466428fb\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.679444 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42d83da4-faa1-4e84-b536-77ee466428fb-log-httpd\") pod \"42d83da4-faa1-4e84-b536-77ee466428fb\" (UID: \"42d83da4-faa1-4e84-b536-77ee466428fb\") " Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.679539 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42d83da4-faa1-4e84-b536-77ee466428fb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "42d83da4-faa1-4e84-b536-77ee466428fb" (UID: "42d83da4-faa1-4e84-b536-77ee466428fb"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.679874 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42d83da4-faa1-4e84-b536-77ee466428fb-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.680051 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42d83da4-faa1-4e84-b536-77ee466428fb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "42d83da4-faa1-4e84-b536-77ee466428fb" (UID: "42d83da4-faa1-4e84-b536-77ee466428fb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.684172 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-scripts" (OuterVolumeSpecName: "scripts") pod "42d83da4-faa1-4e84-b536-77ee466428fb" (UID: "42d83da4-faa1-4e84-b536-77ee466428fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.684412 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42d83da4-faa1-4e84-b536-77ee466428fb-kube-api-access-5gckv" (OuterVolumeSpecName: "kube-api-access-5gckv") pod "42d83da4-faa1-4e84-b536-77ee466428fb" (UID: "42d83da4-faa1-4e84-b536-77ee466428fb"). InnerVolumeSpecName "kube-api-access-5gckv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.708679 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "42d83da4-faa1-4e84-b536-77ee466428fb" (UID: "42d83da4-faa1-4e84-b536-77ee466428fb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.739137 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "42d83da4-faa1-4e84-b536-77ee466428fb" (UID: "42d83da4-faa1-4e84-b536-77ee466428fb"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.767760 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42d83da4-faa1-4e84-b536-77ee466428fb" (UID: "42d83da4-faa1-4e84-b536-77ee466428fb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.781633 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.781694 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.781709 4858 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.781722 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.781735 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gckv\" (UniqueName: \"kubernetes.io/projected/42d83da4-faa1-4e84-b536-77ee466428fb-kube-api-access-5gckv\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.781746 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42d83da4-faa1-4e84-b536-77ee466428fb-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.786442 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-config-data" (OuterVolumeSpecName: "config-data") pod "42d83da4-faa1-4e84-b536-77ee466428fb" (UID: "42d83da4-faa1-4e84-b536-77ee466428fb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.882919 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d83da4-faa1-4e84-b536-77ee466428fb-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.902457 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.910856 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.926195 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 17:34:06 crc kubenswrapper[4858]: E0202 17:34:06.926550 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d83da4-faa1-4e84-b536-77ee466428fb" containerName="sg-core" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.926567 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d83da4-faa1-4e84-b536-77ee466428fb" containerName="sg-core" Feb 02 17:34:06 crc kubenswrapper[4858]: E0202 17:34:06.926584 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d83da4-faa1-4e84-b536-77ee466428fb" containerName="proxy-httpd" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.926593 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d83da4-faa1-4e84-b536-77ee466428fb" containerName="proxy-httpd" Feb 02 17:34:06 crc kubenswrapper[4858]: E0202 17:34:06.926609 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d83da4-faa1-4e84-b536-77ee466428fb" containerName="ceilometer-notification-agent" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.926616 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d83da4-faa1-4e84-b536-77ee466428fb" containerName="ceilometer-notification-agent" Feb 02 17:34:06 crc kubenswrapper[4858]: E0202 17:34:06.926638 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79296b02-fa25-434c-ab5a-6ebb9a116de2" containerName="nova-api-api" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.926645 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="79296b02-fa25-434c-ab5a-6ebb9a116de2" containerName="nova-api-api" Feb 02 17:34:06 crc kubenswrapper[4858]: E0202 17:34:06.926664 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79296b02-fa25-434c-ab5a-6ebb9a116de2" containerName="nova-api-log" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.926671 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="79296b02-fa25-434c-ab5a-6ebb9a116de2" containerName="nova-api-log" Feb 02 17:34:06 crc kubenswrapper[4858]: E0202 17:34:06.926688 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d83da4-faa1-4e84-b536-77ee466428fb" containerName="ceilometer-central-agent" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.926696 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d83da4-faa1-4e84-b536-77ee466428fb" containerName="ceilometer-central-agent" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.926856 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d83da4-faa1-4e84-b536-77ee466428fb" containerName="sg-core" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.926879 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d83da4-faa1-4e84-b536-77ee466428fb" containerName="proxy-httpd" Feb 02 17:34:06 crc 
kubenswrapper[4858]: I0202 17:34:06.926892 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="79296b02-fa25-434c-ab5a-6ebb9a116de2" containerName="nova-api-api" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.926900 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="79296b02-fa25-434c-ab5a-6ebb9a116de2" containerName="nova-api-log" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.926906 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d83da4-faa1-4e84-b536-77ee466428fb" containerName="ceilometer-notification-agent" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.926915 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d83da4-faa1-4e84-b536-77ee466428fb" containerName="ceilometer-central-agent" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.927841 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.933193 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.933317 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.933421 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 02 17:34:06 crc kubenswrapper[4858]: I0202 17:34:06.940504 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.086490 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-internal-tls-certs\") pod \"nova-api-0\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " pod="openstack/nova-api-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.086598 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-public-tls-certs\") pod \"nova-api-0\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " pod="openstack/nova-api-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.086673 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79e199b0-71cf-47cb-bca9-bf04bbea5785-logs\") pod \"nova-api-0\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " pod="openstack/nova-api-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.086821 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-config-data\") pod \"nova-api-0\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " pod="openstack/nova-api-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.086875 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5kxt\" (UniqueName: \"kubernetes.io/projected/79e199b0-71cf-47cb-bca9-bf04bbea5785-kube-api-access-w5kxt\") pod \"nova-api-0\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " pod="openstack/nova-api-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.086995 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " pod="openstack/nova-api-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.189018 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-config-data\") pod \"nova-api-0\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " pod="openstack/nova-api-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.189074 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5kxt\" (UniqueName: \"kubernetes.io/projected/79e199b0-71cf-47cb-bca9-bf04bbea5785-kube-api-access-w5kxt\") pod \"nova-api-0\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " pod="openstack/nova-api-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.189132 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " pod="openstack/nova-api-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.189167 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-internal-tls-certs\") pod \"nova-api-0\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " pod="openstack/nova-api-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.189197 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-public-tls-certs\") pod \"nova-api-0\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " pod="openstack/nova-api-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.189247 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79e199b0-71cf-47cb-bca9-bf04bbea5785-logs\") pod \"nova-api-0\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " pod="openstack/nova-api-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.189705 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79e199b0-71cf-47cb-bca9-bf04bbea5785-logs\") pod \"nova-api-0\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " pod="openstack/nova-api-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.193283 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " pod="openstack/nova-api-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.193475 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-internal-tls-certs\") pod \"nova-api-0\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " pod="openstack/nova-api-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.193769 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-public-tls-certs\") pod \"nova-api-0\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " pod="openstack/nova-api-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.197061 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-config-data\") pod \"nova-api-0\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " pod="openstack/nova-api-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.207179 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5kxt\" (UniqueName: \"kubernetes.io/projected/79e199b0-71cf-47cb-bca9-bf04bbea5785-kube-api-access-w5kxt\") pod \"nova-api-0\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " pod="openstack/nova-api-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.270773 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.449835 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42d83da4-faa1-4e84-b536-77ee466428fb","Type":"ContainerDied","Data":"aee21e7fca17f75931811074f5257607b3c98f530d3a174931e85ddb1f27b9c3"} Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.450298 4858 scope.go:117] "RemoveContainer" containerID="c5be747afe60e49e7be18ed8ad18a0de8c0e622eb98de15230350901e25740b9" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.449900 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.473643 4858 scope.go:117] "RemoveContainer" containerID="67c26dcc05557ee45d4911f6e51d2e1555c18cd2dd03d2e3113591b6b0bbade0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.493721 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.509032 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.510188 4858 scope.go:117] "RemoveContainer" containerID="2396fc8513c5a7ec1bc1bb7157e4dd8c317ec5571a89dbc39c42baa89d9c5558" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.537763 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.545116 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.547530 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.547765 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.547934 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.548570 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.562080 4858 scope.go:117] "RemoveContainer" containerID="41a97d2ca6ee350d93d0d0a8ef535d2ae4b28bb431ddefd3c060324e6248cc40" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.656747 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.678847 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.698233 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32e8a9b4-688e-42b5-8562-23463e2632c1-config-data\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.698313 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32e8a9b4-688e-42b5-8562-23463e2632c1-log-httpd\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.698398 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32e8a9b4-688e-42b5-8562-23463e2632c1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.698443 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32e8a9b4-688e-42b5-8562-23463e2632c1-run-httpd\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.698679 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32e8a9b4-688e-42b5-8562-23463e2632c1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.698780 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/32e8a9b4-688e-42b5-8562-23463e2632c1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.698831 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f98ts\" (UniqueName: \"kubernetes.io/projected/32e8a9b4-688e-42b5-8562-23463e2632c1-kube-api-access-f98ts\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.698910 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32e8a9b4-688e-42b5-8562-23463e2632c1-scripts\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.731653 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.800895 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32e8a9b4-688e-42b5-8562-23463e2632c1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.800957 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32e8a9b4-688e-42b5-8562-23463e2632c1-run-httpd\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.801040 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32e8a9b4-688e-42b5-8562-23463e2632c1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.801095 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/32e8a9b4-688e-42b5-8562-23463e2632c1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.801520 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f98ts\" (UniqueName: \"kubernetes.io/projected/32e8a9b4-688e-42b5-8562-23463e2632c1-kube-api-access-f98ts\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.801564 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32e8a9b4-688e-42b5-8562-23463e2632c1-scripts\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.801589 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32e8a9b4-688e-42b5-8562-23463e2632c1-run-httpd\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.801796 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/32e8a9b4-688e-42b5-8562-23463e2632c1-config-data\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.801865 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32e8a9b4-688e-42b5-8562-23463e2632c1-log-httpd\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.802249 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32e8a9b4-688e-42b5-8562-23463e2632c1-log-httpd\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.808096 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/32e8a9b4-688e-42b5-8562-23463e2632c1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.808671 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32e8a9b4-688e-42b5-8562-23463e2632c1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.808881 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32e8a9b4-688e-42b5-8562-23463e2632c1-scripts\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.809023 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32e8a9b4-688e-42b5-8562-23463e2632c1-config-data\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.811385 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32e8a9b4-688e-42b5-8562-23463e2632c1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.822097 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f98ts\" (UniqueName: \"kubernetes.io/projected/32e8a9b4-688e-42b5-8562-23463e2632c1-kube-api-access-f98ts\") pod \"ceilometer-0\" (UID: \"32e8a9b4-688e-42b5-8562-23463e2632c1\") " pod="openstack/ceilometer-0" Feb 02 17:34:07 crc kubenswrapper[4858]: I0202 17:34:07.862819 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.331178 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 17:34:08 crc kubenswrapper[4858]: W0202 17:34:08.337826 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32e8a9b4_688e_42b5_8562_23463e2632c1.slice/crio-8ea3306957de75c682cf07a9d811ffc06bed14143c4474d7ec915c9864e8c5c1 WatchSource:0}: Error finding container 8ea3306957de75c682cf07a9d811ffc06bed14143c4474d7ec915c9864e8c5c1: Status 404 returned error can't find the container with id 8ea3306957de75c682cf07a9d811ffc06bed14143c4474d7ec915c9864e8c5c1 Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.416691 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42d83da4-faa1-4e84-b536-77ee466428fb" path="/var/lib/kubelet/pods/42d83da4-faa1-4e84-b536-77ee466428fb/volumes" Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.417468 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79296b02-fa25-434c-ab5a-6ebb9a116de2" path="/var/lib/kubelet/pods/79296b02-fa25-434c-ab5a-6ebb9a116de2/volumes" Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.466070 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32e8a9b4-688e-42b5-8562-23463e2632c1","Type":"ContainerStarted","Data":"8ea3306957de75c682cf07a9d811ffc06bed14143c4474d7ec915c9864e8c5c1"} Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.467950 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79e199b0-71cf-47cb-bca9-bf04bbea5785","Type":"ContainerStarted","Data":"91508f145ce93ca955e214b76b0ee3e2122328f13647edd38bac3a4a17c61386"} Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.468011 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79e199b0-71cf-47cb-bca9-bf04bbea5785","Type":"ContainerStarted","Data":"55d0a3eb9fdd2784e653dda5cee3ef9ce3a3c78d5c8184e3cfdbeb492004c4d4"} Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.468026 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79e199b0-71cf-47cb-bca9-bf04bbea5785","Type":"ContainerStarted","Data":"c91143642dc521c54c7ea42037a43a003b422005001ce52b3b19c435922ae2cf"} Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.499457 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.500053 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5000257 podStartE2EDuration="2.5000257s" podCreationTimestamp="2026-02-02 17:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:34:08.493738992 +0000 UTC m=+1149.646154267" watchObservedRunningTime="2026-02-02 17:34:08.5000257 +0000 UTC m=+1149.652440975" Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.688916 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-dgbrn"] Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.690598 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dgbrn" Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.692820 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.693048 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.698929 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dgbrn"] Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.824118 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dgbrn\" (UID: \"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527\") " pod="openstack/nova-cell1-cell-mapping-dgbrn" Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.824212 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8dpt\" (UniqueName: \"kubernetes.io/projected/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-kube-api-access-v8dpt\") pod \"nova-cell1-cell-mapping-dgbrn\" (UID: \"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527\") " pod="openstack/nova-cell1-cell-mapping-dgbrn" Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.824327 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-config-data\") pod \"nova-cell1-cell-mapping-dgbrn\" (UID: \"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527\") " pod="openstack/nova-cell1-cell-mapping-dgbrn" Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.824355 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-scripts\") pod \"nova-cell1-cell-mapping-dgbrn\" (UID: \"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527\") " pod="openstack/nova-cell1-cell-mapping-dgbrn" Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.926439 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8dpt\" (UniqueName: \"kubernetes.io/projected/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-kube-api-access-v8dpt\") pod \"nova-cell1-cell-mapping-dgbrn\" (UID: \"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527\") " pod="openstack/nova-cell1-cell-mapping-dgbrn" Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.926583 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-config-data\") pod \"nova-cell1-cell-mapping-dgbrn\" (UID: \"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527\") " pod="openstack/nova-cell1-cell-mapping-dgbrn" Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.926615 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-scripts\") pod \"nova-cell1-cell-mapping-dgbrn\" (UID: \"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527\") " pod="openstack/nova-cell1-cell-mapping-dgbrn" Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.926695 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dgbrn\" (UID: \"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527\") " pod="openstack/nova-cell1-cell-mapping-dgbrn" Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.931322 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-scripts\") pod \"nova-cell1-cell-mapping-dgbrn\" (UID: \"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527\") " pod="openstack/nova-cell1-cell-mapping-dgbrn" Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.931398 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-config-data\") pod \"nova-cell1-cell-mapping-dgbrn\" (UID: \"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527\") " pod="openstack/nova-cell1-cell-mapping-dgbrn" Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.931667 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dgbrn\" (UID: \"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527\") " pod="openstack/nova-cell1-cell-mapping-dgbrn" Feb 02 17:34:08 crc kubenswrapper[4858]: I0202 17:34:08.941869 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8dpt\" (UniqueName: \"kubernetes.io/projected/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-kube-api-access-v8dpt\") pod \"nova-cell1-cell-mapping-dgbrn\" (UID: \"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527\") " pod="openstack/nova-cell1-cell-mapping-dgbrn" Feb 02 17:34:09 crc kubenswrapper[4858]: I0202 17:34:09.010647 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dgbrn" Feb 02 17:34:09 crc kubenswrapper[4858]: I0202 17:34:09.478575 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dgbrn"] Feb 02 17:34:09 crc kubenswrapper[4858]: I0202 17:34:09.496182 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32e8a9b4-688e-42b5-8562-23463e2632c1","Type":"ContainerStarted","Data":"700ca026679027d5881a69af3f8c23eca8159a16f4180a264cf3682a0cbd54c4"} Feb 02 17:34:09 crc kubenswrapper[4858]: I0202 17:34:09.969276 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.027172 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-4548m"] Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.027487 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-4548m" podUID="21d771a5-5ae8-4ed9-9572-6ff76bb713ec" containerName="dnsmasq-dns" containerID="cri-o://6541c5a84a5c1f43a592ced83985462dcf67be32d2f7d625403474f651f3bfdb" gracePeriod=10 Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.507288 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dgbrn" event={"ID":"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527","Type":"ContainerStarted","Data":"3a8912ade1b9684286e5dbc3736a2395cf89c9ba6ced6715212b5a8427155b2a"} Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.507700 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dgbrn" event={"ID":"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527","Type":"ContainerStarted","Data":"834441cf1b9836b67cd6fad7b3d8c76df50e0cd5b341048b57b1b780261ae5bc"} Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.512239 4858 generic.go:334] "Generic (PLEG): container finished" podID="21d771a5-5ae8-4ed9-9572-6ff76bb713ec" containerID="6541c5a84a5c1f43a592ced83985462dcf67be32d2f7d625403474f651f3bfdb" exitCode=0 Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.512299 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-4548m" event={"ID":"21d771a5-5ae8-4ed9-9572-6ff76bb713ec","Type":"ContainerDied","Data":"6541c5a84a5c1f43a592ced83985462dcf67be32d2f7d625403474f651f3bfdb"} Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.540937 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-dgbrn" podStartSLOduration=2.54091986 podStartE2EDuration="2.54091986s" podCreationTimestamp="2026-02-02 17:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:34:10.523565039 +0000 UTC m=+1151.675980304" watchObservedRunningTime="2026-02-02 17:34:10.54091986 +0000 UTC m=+1151.693335125" Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.546492 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32e8a9b4-688e-42b5-8562-23463e2632c1","Type":"ContainerStarted","Data":"f9b7fd226b943d667946484bf826c00337aff7b788ca11f93c469bf8e2f8912b"} Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.546566 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"32e8a9b4-688e-42b5-8562-23463e2632c1","Type":"ContainerStarted","Data":"79330e7ec50261ef3d61448411828c117c5806888025a6ff82098afe15912303"} Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.627723 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.768677 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-dns-swift-storage-0\") pod \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.768846 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffmcr\" (UniqueName: \"kubernetes.io/projected/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-kube-api-access-ffmcr\") pod \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.768943 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-ovsdbserver-sb\") pod \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.769016 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-ovsdbserver-nb\") pod \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.769054 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-dns-svc\") pod \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.769111 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-config\") pod \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\" (UID: \"21d771a5-5ae8-4ed9-9572-6ff76bb713ec\") " Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.780402 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-kube-api-access-ffmcr" (OuterVolumeSpecName: "kube-api-access-ffmcr") pod "21d771a5-5ae8-4ed9-9572-6ff76bb713ec" (UID: "21d771a5-5ae8-4ed9-9572-6ff76bb713ec"). InnerVolumeSpecName "kube-api-access-ffmcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.834030 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "21d771a5-5ae8-4ed9-9572-6ff76bb713ec" (UID: "21d771a5-5ae8-4ed9-9572-6ff76bb713ec"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.834760 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "21d771a5-5ae8-4ed9-9572-6ff76bb713ec" (UID: "21d771a5-5ae8-4ed9-9572-6ff76bb713ec"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.836249 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-config" (OuterVolumeSpecName: "config") pod "21d771a5-5ae8-4ed9-9572-6ff76bb713ec" (UID: "21d771a5-5ae8-4ed9-9572-6ff76bb713ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.850808 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "21d771a5-5ae8-4ed9-9572-6ff76bb713ec" (UID: "21d771a5-5ae8-4ed9-9572-6ff76bb713ec"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.854509 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "21d771a5-5ae8-4ed9-9572-6ff76bb713ec" (UID: "21d771a5-5ae8-4ed9-9572-6ff76bb713ec"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.871989 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffmcr\" (UniqueName: \"kubernetes.io/projected/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-kube-api-access-ffmcr\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.872034 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.872048 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.872060 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.872072 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:10 crc kubenswrapper[4858]: I0202 17:34:10.872087 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21d771a5-5ae8-4ed9-9572-6ff76bb713ec-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:11 crc kubenswrapper[4858]: I0202 17:34:11.540124 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-4548m" 
event={"ID":"21d771a5-5ae8-4ed9-9572-6ff76bb713ec","Type":"ContainerDied","Data":"fc962a06f6d95111c7107f8af7d692763ddfa870a7e113e3a64141ffd6e637b1"} Feb 02 17:34:11 crc kubenswrapper[4858]: I0202 17:34:11.540292 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-4548m" Feb 02 17:34:11 crc kubenswrapper[4858]: I0202 17:34:11.540495 4858 scope.go:117] "RemoveContainer" containerID="6541c5a84a5c1f43a592ced83985462dcf67be32d2f7d625403474f651f3bfdb" Feb 02 17:34:11 crc kubenswrapper[4858]: I0202 17:34:11.572353 4858 scope.go:117] "RemoveContainer" containerID="5e07a949ccd640a6d267d44f0bfc790cda7be8eb82016fad2ddd1b656ea5b06d" Feb 02 17:34:11 crc kubenswrapper[4858]: I0202 17:34:11.603201 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-4548m"] Feb 02 17:34:11 crc kubenswrapper[4858]: I0202 17:34:11.614782 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-4548m"] Feb 02 17:34:12 crc kubenswrapper[4858]: I0202 17:34:12.412374 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21d771a5-5ae8-4ed9-9572-6ff76bb713ec" path="/var/lib/kubelet/pods/21d771a5-5ae8-4ed9-9572-6ff76bb713ec/volumes" Feb 02 17:34:14 crc kubenswrapper[4858]: I0202 17:34:14.568620 4858 generic.go:334] "Generic (PLEG): container finished" podID="40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527" containerID="3a8912ade1b9684286e5dbc3736a2395cf89c9ba6ced6715212b5a8427155b2a" exitCode=0 Feb 02 17:34:14 crc kubenswrapper[4858]: I0202 17:34:14.568700 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dgbrn" event={"ID":"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527","Type":"ContainerDied","Data":"3a8912ade1b9684286e5dbc3736a2395cf89c9ba6ced6715212b5a8427155b2a"} Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.021154 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dgbrn" Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.076338 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-config-data\") pod \"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527\" (UID: \"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527\") " Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.076444 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-scripts\") pod \"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527\" (UID: \"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527\") " Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.076626 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8dpt\" (UniqueName: \"kubernetes.io/projected/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-kube-api-access-v8dpt\") pod \"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527\" (UID: \"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527\") " Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.076661 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-combined-ca-bundle\") pod \"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527\" (UID: \"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527\") " Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.085471 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-scripts" (OuterVolumeSpecName: "scripts") pod "40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527" (UID: "40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.089659 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-kube-api-access-v8dpt" (OuterVolumeSpecName: "kube-api-access-v8dpt") pod "40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527" (UID: "40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527"). InnerVolumeSpecName "kube-api-access-v8dpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.117371 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527" (UID: "40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.118864 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-config-data" (OuterVolumeSpecName: "config-data") pod "40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527" (UID: "40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.179284 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.179789 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.179803 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8dpt\" (UniqueName: \"kubernetes.io/projected/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-kube-api-access-v8dpt\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.179819 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.607435 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dgbrn" event={"ID":"40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527","Type":"ContainerDied","Data":"834441cf1b9836b67cd6fad7b3d8c76df50e0cd5b341048b57b1b780261ae5bc"} Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.607751 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="834441cf1b9836b67cd6fad7b3d8c76df50e0cd5b341048b57b1b780261ae5bc" Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.607476 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dgbrn" Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.612546 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"32e8a9b4-688e-42b5-8562-23463e2632c1","Type":"ContainerStarted","Data":"dfb3c0840aed4ac0cf6d8808191b85ce535b678638cf7cc440f7c24caedee1ec"} Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.612863 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.642812 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.074185547 podStartE2EDuration="9.642791543s" podCreationTimestamp="2026-02-02 17:34:07 +0000 UTC" firstStartedPulling="2026-02-02 17:34:08.347522325 +0000 UTC m=+1149.499937600" lastFinishedPulling="2026-02-02 17:34:15.916128331 +0000 UTC m=+1157.068543596" observedRunningTime="2026-02-02 17:34:16.633249983 +0000 UTC m=+1157.785665248" watchObservedRunningTime="2026-02-02 17:34:16.642791543 +0000 UTC m=+1157.795206818" Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.782377 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.782633 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="53aea7b5-fb90-427a-b245-2c49de3d9ca4" containerName="nova-scheduler-scheduler" containerID="cri-o://ec7f43dd332db924d071caa1f4e6b590845f807c17211d96ce30dc7b9e664e66" gracePeriod=30 Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.793323 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-0"] Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.793645 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="79e199b0-71cf-47cb-bca9-bf04bbea5785" containerName="nova-api-api" containerID="cri-o://91508f145ce93ca955e214b76b0ee3e2122328f13647edd38bac3a4a17c61386" gracePeriod=30 Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.793645 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="79e199b0-71cf-47cb-bca9-bf04bbea5785" containerName="nova-api-log" containerID="cri-o://55d0a3eb9fdd2784e653dda5cee3ef9ce3a3c78d5c8184e3cfdbeb492004c4d4" gracePeriod=30 Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.854909 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.855315 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3ab13622-4d61-4621-b865-45ede50fcaeb" containerName="nova-metadata-metadata" containerID="cri-o://448bcbb4084d67ca18b597fcaa91e184905eb4a03ebf3a5ae6a82dd3252c3092" gracePeriod=30 Feb 02 17:34:16 crc kubenswrapper[4858]: I0202 17:34:16.855219 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3ab13622-4d61-4621-b865-45ede50fcaeb" containerName="nova-metadata-log" containerID="cri-o://a45ca9c3436d4e9a318561a0f4518b985cf1546a2c8e186de80f2e05bab94b59" gracePeriod=30 Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.114549 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.121749 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.125218 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-config-data\") pod \"79e199b0-71cf-47cb-bca9-bf04bbea5785\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.125311 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53aea7b5-fb90-427a-b245-2c49de3d9ca4-config-data\") pod \"53aea7b5-fb90-427a-b245-2c49de3d9ca4\" (UID: \"53aea7b5-fb90-427a-b245-2c49de3d9ca4\") " Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.125338 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-public-tls-certs\") pod \"79e199b0-71cf-47cb-bca9-bf04bbea5785\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.125381 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79e199b0-71cf-47cb-bca9-bf04bbea5785-logs\") pod \"79e199b0-71cf-47cb-bca9-bf04bbea5785\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.125404 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ch9s\" (UniqueName: \"kubernetes.io/projected/53aea7b5-fb90-427a-b245-2c49de3d9ca4-kube-api-access-9ch9s\") pod \"53aea7b5-fb90-427a-b245-2c49de3d9ca4\" (UID: \"53aea7b5-fb90-427a-b245-2c49de3d9ca4\") " Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.125445 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53aea7b5-fb90-427a-b245-2c49de3d9ca4-combined-ca-bundle\") pod \"53aea7b5-fb90-427a-b245-2c49de3d9ca4\" (UID: \"53aea7b5-fb90-427a-b245-2c49de3d9ca4\") " Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.125478 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5kxt\" (UniqueName: \"kubernetes.io/projected/79e199b0-71cf-47cb-bca9-bf04bbea5785-kube-api-access-w5kxt\") pod \"79e199b0-71cf-47cb-bca9-bf04bbea5785\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.125504 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-internal-tls-certs\") pod \"79e199b0-71cf-47cb-bca9-bf04bbea5785\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.125529 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-combined-ca-bundle\") pod \"79e199b0-71cf-47cb-bca9-bf04bbea5785\" (UID: \"79e199b0-71cf-47cb-bca9-bf04bbea5785\") " Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.125833 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79e199b0-71cf-47cb-bca9-bf04bbea5785-logs" (OuterVolumeSpecName: "logs") pod "79e199b0-71cf-47cb-bca9-bf04bbea5785" (UID: 
"79e199b0-71cf-47cb-bca9-bf04bbea5785"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.134683 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53aea7b5-fb90-427a-b245-2c49de3d9ca4-kube-api-access-9ch9s" (OuterVolumeSpecName: "kube-api-access-9ch9s") pod "53aea7b5-fb90-427a-b245-2c49de3d9ca4" (UID: "53aea7b5-fb90-427a-b245-2c49de3d9ca4"). InnerVolumeSpecName "kube-api-access-9ch9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.139882 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e199b0-71cf-47cb-bca9-bf04bbea5785-kube-api-access-w5kxt" (OuterVolumeSpecName: "kube-api-access-w5kxt") pod "79e199b0-71cf-47cb-bca9-bf04bbea5785" (UID: "79e199b0-71cf-47cb-bca9-bf04bbea5785"). InnerVolumeSpecName "kube-api-access-w5kxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.176313 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-config-data" (OuterVolumeSpecName: "config-data") pod "79e199b0-71cf-47cb-bca9-bf04bbea5785" (UID: "79e199b0-71cf-47cb-bca9-bf04bbea5785"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.192200 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53aea7b5-fb90-427a-b245-2c49de3d9ca4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53aea7b5-fb90-427a-b245-2c49de3d9ca4" (UID: "53aea7b5-fb90-427a-b245-2c49de3d9ca4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.198479 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53aea7b5-fb90-427a-b245-2c49de3d9ca4-config-data" (OuterVolumeSpecName: "config-data") pod "53aea7b5-fb90-427a-b245-2c49de3d9ca4" (UID: "53aea7b5-fb90-427a-b245-2c49de3d9ca4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.206460 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "79e199b0-71cf-47cb-bca9-bf04bbea5785" (UID: "79e199b0-71cf-47cb-bca9-bf04bbea5785"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.209475 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "79e199b0-71cf-47cb-bca9-bf04bbea5785" (UID: "79e199b0-71cf-47cb-bca9-bf04bbea5785"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.209926 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79e199b0-71cf-47cb-bca9-bf04bbea5785" (UID: "79e199b0-71cf-47cb-bca9-bf04bbea5785"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.226948 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.226998 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79e199b0-71cf-47cb-bca9-bf04bbea5785-logs\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.227010 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ch9s\" (UniqueName: \"kubernetes.io/projected/53aea7b5-fb90-427a-b245-2c49de3d9ca4-kube-api-access-9ch9s\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.227020 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53aea7b5-fb90-427a-b245-2c49de3d9ca4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.227029 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5kxt\" (UniqueName: \"kubernetes.io/projected/79e199b0-71cf-47cb-bca9-bf04bbea5785-kube-api-access-w5kxt\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.227038 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.227047 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.227055 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79e199b0-71cf-47cb-bca9-bf04bbea5785-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.227062 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53aea7b5-fb90-427a-b245-2c49de3d9ca4-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.260789 4858 generic.go:334] "Generic (PLEG): container finished" podID="79e199b0-71cf-47cb-bca9-bf04bbea5785" containerID="91508f145ce93ca955e214b76b0ee3e2122328f13647edd38bac3a4a17c61386" exitCode=0 Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.260823 4858 generic.go:334] "Generic (PLEG): container finished" podID="79e199b0-71cf-47cb-bca9-bf04bbea5785" containerID="55d0a3eb9fdd2784e653dda5cee3ef9ce3a3c78d5c8184e3cfdbeb492004c4d4" exitCode=143 Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.260856 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"79e199b0-71cf-47cb-bca9-bf04bbea5785","Type":"ContainerDied","Data":"91508f145ce93ca955e214b76b0ee3e2122328f13647edd38bac3a4a17c61386"} Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.260882 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79e199b0-71cf-47cb-bca9-bf04bbea5785","Type":"ContainerDied","Data":"55d0a3eb9fdd2784e653dda5cee3ef9ce3a3c78d5c8184e3cfdbeb492004c4d4"} Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.260893 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"79e199b0-71cf-47cb-bca9-bf04bbea5785","Type":"ContainerDied","Data":"c91143642dc521c54c7ea42037a43a003b422005001ce52b3b19c435922ae2cf"} Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.260907 4858 scope.go:117] "RemoveContainer" containerID="91508f145ce93ca955e214b76b0ee3e2122328f13647edd38bac3a4a17c61386" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.261037 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.271531 4858 generic.go:334] "Generic (PLEG): container finished" podID="3ab13622-4d61-4621-b865-45ede50fcaeb" containerID="a45ca9c3436d4e9a318561a0f4518b985cf1546a2c8e186de80f2e05bab94b59" exitCode=143 Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.271721 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ab13622-4d61-4621-b865-45ede50fcaeb","Type":"ContainerDied","Data":"a45ca9c3436d4e9a318561a0f4518b985cf1546a2c8e186de80f2e05bab94b59"} Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.274915 4858 generic.go:334] "Generic (PLEG): container finished" podID="53aea7b5-fb90-427a-b245-2c49de3d9ca4" containerID="ec7f43dd332db924d071caa1f4e6b590845f807c17211d96ce30dc7b9e664e66" exitCode=0 Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.274990 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.275005 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"53aea7b5-fb90-427a-b245-2c49de3d9ca4","Type":"ContainerDied","Data":"ec7f43dd332db924d071caa1f4e6b590845f807c17211d96ce30dc7b9e664e66"} Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.275307 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"53aea7b5-fb90-427a-b245-2c49de3d9ca4","Type":"ContainerDied","Data":"e6c0a9184e65b32c8ad985698375fc19933627063ebc413e19bd64668cb24193"} Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.305631 4858 scope.go:117] "RemoveContainer" containerID="55d0a3eb9fdd2784e653dda5cee3ef9ce3a3c78d5c8184e3cfdbeb492004c4d4" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.324560 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.360379 4858 scope.go:117] "RemoveContainer" containerID="91508f145ce93ca955e214b76b0ee3e2122328f13647edd38bac3a4a17c61386" Feb 02 17:34:19 crc kubenswrapper[4858]: E0202 17:34:19.361254 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91508f145ce93ca955e214b76b0ee3e2122328f13647edd38bac3a4a17c61386\": container with ID starting with 91508f145ce93ca955e214b76b0ee3e2122328f13647edd38bac3a4a17c61386 not found: ID does not exist" containerID="91508f145ce93ca955e214b76b0ee3e2122328f13647edd38bac3a4a17c61386" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.361332 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91508f145ce93ca955e214b76b0ee3e2122328f13647edd38bac3a4a17c61386"} err="failed to get container status \"91508f145ce93ca955e214b76b0ee3e2122328f13647edd38bac3a4a17c61386\": rpc error: code = NotFound desc = could not find container \"91508f145ce93ca955e214b76b0ee3e2122328f13647edd38bac3a4a17c61386\": container with ID starting with 91508f145ce93ca955e214b76b0ee3e2122328f13647edd38bac3a4a17c61386 not found: ID does not exist" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.361361 4858 scope.go:117] "RemoveContainer" containerID="55d0a3eb9fdd2784e653dda5cee3ef9ce3a3c78d5c8184e3cfdbeb492004c4d4" Feb 02 17:34:19 crc kubenswrapper[4858]: E0202 17:34:19.375883 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55d0a3eb9fdd2784e653dda5cee3ef9ce3a3c78d5c8184e3cfdbeb492004c4d4\": container with ID starting with 55d0a3eb9fdd2784e653dda5cee3ef9ce3a3c78d5c8184e3cfdbeb492004c4d4 not found: ID does not exist" containerID="55d0a3eb9fdd2784e653dda5cee3ef9ce3a3c78d5c8184e3cfdbeb492004c4d4" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.375961 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55d0a3eb9fdd2784e653dda5cee3ef9ce3a3c78d5c8184e3cfdbeb492004c4d4"} err="failed to get container status \"55d0a3eb9fdd2784e653dda5cee3ef9ce3a3c78d5c8184e3cfdbeb492004c4d4\": rpc error: code = NotFound desc = could not find container \"55d0a3eb9fdd2784e653dda5cee3ef9ce3a3c78d5c8184e3cfdbeb492004c4d4\": container with ID starting with 55d0a3eb9fdd2784e653dda5cee3ef9ce3a3c78d5c8184e3cfdbeb492004c4d4 not found: ID does not exist" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.376108 4858 scope.go:117] 
"RemoveContainer" containerID="91508f145ce93ca955e214b76b0ee3e2122328f13647edd38bac3a4a17c61386" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.381460 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91508f145ce93ca955e214b76b0ee3e2122328f13647edd38bac3a4a17c61386"} err="failed to get container status \"91508f145ce93ca955e214b76b0ee3e2122328f13647edd38bac3a4a17c61386\": rpc error: code = NotFound desc = could not find container \"91508f145ce93ca955e214b76b0ee3e2122328f13647edd38bac3a4a17c61386\": container with ID starting with 91508f145ce93ca955e214b76b0ee3e2122328f13647edd38bac3a4a17c61386 not found: ID does not exist" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.381514 4858 scope.go:117] "RemoveContainer" containerID="55d0a3eb9fdd2784e653dda5cee3ef9ce3a3c78d5c8184e3cfdbeb492004c4d4" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.382729 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55d0a3eb9fdd2784e653dda5cee3ef9ce3a3c78d5c8184e3cfdbeb492004c4d4"} err="failed to get container status \"55d0a3eb9fdd2784e653dda5cee3ef9ce3a3c78d5c8184e3cfdbeb492004c4d4\": rpc error: code = NotFound desc = could not find container \"55d0a3eb9fdd2784e653dda5cee3ef9ce3a3c78d5c8184e3cfdbeb492004c4d4\": container with ID starting with 55d0a3eb9fdd2784e653dda5cee3ef9ce3a3c78d5c8184e3cfdbeb492004c4d4 not found: ID does not exist" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.382761 4858 scope.go:117] "RemoveContainer" containerID="ec7f43dd332db924d071caa1f4e6b590845f807c17211d96ce30dc7b9e664e66" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.393289 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.437930 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.439329 4858 scope.go:117] "RemoveContainer" containerID="ec7f43dd332db924d071caa1f4e6b590845f807c17211d96ce30dc7b9e664e66" Feb 02 17:34:19 crc kubenswrapper[4858]: E0202 17:34:19.439942 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec7f43dd332db924d071caa1f4e6b590845f807c17211d96ce30dc7b9e664e66\": container with ID starting with ec7f43dd332db924d071caa1f4e6b590845f807c17211d96ce30dc7b9e664e66 not found: ID does not exist" containerID="ec7f43dd332db924d071caa1f4e6b590845f807c17211d96ce30dc7b9e664e66" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.440185 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec7f43dd332db924d071caa1f4e6b590845f807c17211d96ce30dc7b9e664e66"} err="failed to get container status \"ec7f43dd332db924d071caa1f4e6b590845f807c17211d96ce30dc7b9e664e66\": rpc error: code = NotFound desc = could not find container \"ec7f43dd332db924d071caa1f4e6b590845f807c17211d96ce30dc7b9e664e66\": container with ID starting with ec7f43dd332db924d071caa1f4e6b590845f807c17211d96ce30dc7b9e664e66 not found: ID does not exist" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.447927 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.458381 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 17:34:19 crc kubenswrapper[4858]: E0202 17:34:19.459063 4858 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e199b0-71cf-47cb-bca9-bf04bbea5785" containerName="nova-api-log" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.459168 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e199b0-71cf-47cb-bca9-bf04bbea5785" containerName="nova-api-log" Feb 02 17:34:19 crc kubenswrapper[4858]: E0202 17:34:19.459252 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21d771a5-5ae8-4ed9-9572-6ff76bb713ec" containerName="init" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.459321 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d771a5-5ae8-4ed9-9572-6ff76bb713ec" containerName="init" Feb 02 17:34:19 crc kubenswrapper[4858]: E0202 17:34:19.459468 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21d771a5-5ae8-4ed9-9572-6ff76bb713ec" containerName="dnsmasq-dns" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.459548 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d771a5-5ae8-4ed9-9572-6ff76bb713ec" containerName="dnsmasq-dns" Feb 02 17:34:19 crc kubenswrapper[4858]: E0202 17:34:19.459645 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53aea7b5-fb90-427a-b245-2c49de3d9ca4" containerName="nova-scheduler-scheduler" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.459720 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="53aea7b5-fb90-427a-b245-2c49de3d9ca4" containerName="nova-scheduler-scheduler" Feb 02 17:34:19 crc kubenswrapper[4858]: E0202 17:34:19.459806 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527" containerName="nova-manage" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.459884 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527" containerName="nova-manage" Feb 02 17:34:19 crc kubenswrapper[4858]: E0202 17:34:19.459994 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e199b0-71cf-47cb-bca9-bf04bbea5785" containerName="nova-api-api" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.460080 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e199b0-71cf-47cb-bca9-bf04bbea5785" containerName="nova-api-api" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.460382 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="21d771a5-5ae8-4ed9-9572-6ff76bb713ec" containerName="dnsmasq-dns" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.460483 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="53aea7b5-fb90-427a-b245-2c49de3d9ca4" containerName="nova-scheduler-scheduler" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.460564 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527" containerName="nova-manage" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.460653 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e199b0-71cf-47cb-bca9-bf04bbea5785" containerName="nova-api-log" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.460733 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e199b0-71cf-47cb-bca9-bf04bbea5785" containerName="nova-api-api" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.462100 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.465043 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.465424 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.465514 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.473305 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.475921 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.481943 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.497068 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.506086 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.536257 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7215c3c5-9746-4192-b018-0c31b42cee4d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7215c3c5-9746-4192-b018-0c31b42cee4d\") " pod="openstack/nova-scheduler-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.536390 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5402e6ff-48ec-47b2-b68e-3385e51ec388-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5402e6ff-48ec-47b2-b68e-3385e51ec388\") " pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.536430 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5402e6ff-48ec-47b2-b68e-3385e51ec388-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5402e6ff-48ec-47b2-b68e-3385e51ec388\") " pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.536584 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7215c3c5-9746-4192-b018-0c31b42cee4d-config-data\") pod \"nova-scheduler-0\" (UID: \"7215c3c5-9746-4192-b018-0c31b42cee4d\") " pod="openstack/nova-scheduler-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.536625 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b4lr\" (UniqueName: \"kubernetes.io/projected/7215c3c5-9746-4192-b018-0c31b42cee4d-kube-api-access-9b4lr\") pod \"nova-scheduler-0\" (UID: \"7215c3c5-9746-4192-b018-0c31b42cee4d\") " pod="openstack/nova-scheduler-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.536665 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xswpp\" (UniqueName: \"kubernetes.io/projected/5402e6ff-48ec-47b2-b68e-3385e51ec388-kube-api-access-xswpp\") pod 
\"nova-api-0\" (UID: \"5402e6ff-48ec-47b2-b68e-3385e51ec388\") " pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.536697 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5402e6ff-48ec-47b2-b68e-3385e51ec388-config-data\") pod \"nova-api-0\" (UID: \"5402e6ff-48ec-47b2-b68e-3385e51ec388\") " pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.536885 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5402e6ff-48ec-47b2-b68e-3385e51ec388-public-tls-certs\") pod \"nova-api-0\" (UID: \"5402e6ff-48ec-47b2-b68e-3385e51ec388\") " pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.536913 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5402e6ff-48ec-47b2-b68e-3385e51ec388-logs\") pod \"nova-api-0\" (UID: \"5402e6ff-48ec-47b2-b68e-3385e51ec388\") " pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.638165 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5402e6ff-48ec-47b2-b68e-3385e51ec388-logs\") pod \"nova-api-0\" (UID: \"5402e6ff-48ec-47b2-b68e-3385e51ec388\") " pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.638484 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7215c3c5-9746-4192-b018-0c31b42cee4d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7215c3c5-9746-4192-b018-0c31b42cee4d\") " pod="openstack/nova-scheduler-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.638628 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5402e6ff-48ec-47b2-b68e-3385e51ec388-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5402e6ff-48ec-47b2-b68e-3385e51ec388\") " pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.638760 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5402e6ff-48ec-47b2-b68e-3385e51ec388-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5402e6ff-48ec-47b2-b68e-3385e51ec388\") " pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.638888 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7215c3c5-9746-4192-b018-0c31b42cee4d-config-data\") pod \"nova-scheduler-0\" (UID: \"7215c3c5-9746-4192-b018-0c31b42cee4d\") " pod="openstack/nova-scheduler-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.639048 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b4lr\" (UniqueName: \"kubernetes.io/projected/7215c3c5-9746-4192-b018-0c31b42cee4d-kube-api-access-9b4lr\") pod \"nova-scheduler-0\" (UID: \"7215c3c5-9746-4192-b018-0c31b42cee4d\") " pod="openstack/nova-scheduler-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.639174 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xswpp\" (UniqueName: 
\"kubernetes.io/projected/5402e6ff-48ec-47b2-b68e-3385e51ec388-kube-api-access-xswpp\") pod \"nova-api-0\" (UID: \"5402e6ff-48ec-47b2-b68e-3385e51ec388\") " pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.639290 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5402e6ff-48ec-47b2-b68e-3385e51ec388-config-data\") pod \"nova-api-0\" (UID: \"5402e6ff-48ec-47b2-b68e-3385e51ec388\") " pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.639377 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5402e6ff-48ec-47b2-b68e-3385e51ec388-public-tls-certs\") pod \"nova-api-0\" (UID: \"5402e6ff-48ec-47b2-b68e-3385e51ec388\") " pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.638998 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5402e6ff-48ec-47b2-b68e-3385e51ec388-logs\") pod \"nova-api-0\" (UID: \"5402e6ff-48ec-47b2-b68e-3385e51ec388\") " pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.645344 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7215c3c5-9746-4192-b018-0c31b42cee4d-config-data\") pod \"nova-scheduler-0\" (UID: \"7215c3c5-9746-4192-b018-0c31b42cee4d\") " pod="openstack/nova-scheduler-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.645644 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5402e6ff-48ec-47b2-b68e-3385e51ec388-config-data\") pod \"nova-api-0\" (UID: \"5402e6ff-48ec-47b2-b68e-3385e51ec388\") " pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.646695 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7215c3c5-9746-4192-b018-0c31b42cee4d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7215c3c5-9746-4192-b018-0c31b42cee4d\") " pod="openstack/nova-scheduler-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.649563 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5402e6ff-48ec-47b2-b68e-3385e51ec388-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5402e6ff-48ec-47b2-b68e-3385e51ec388\") " pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.651414 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5402e6ff-48ec-47b2-b68e-3385e51ec388-public-tls-certs\") pod \"nova-api-0\" (UID: \"5402e6ff-48ec-47b2-b68e-3385e51ec388\") " pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.658937 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5402e6ff-48ec-47b2-b68e-3385e51ec388-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5402e6ff-48ec-47b2-b68e-3385e51ec388\") " pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.662697 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xswpp\" (UniqueName: 
\"kubernetes.io/projected/5402e6ff-48ec-47b2-b68e-3385e51ec388-kube-api-access-xswpp\") pod \"nova-api-0\" (UID: \"5402e6ff-48ec-47b2-b68e-3385e51ec388\") " pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.664220 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b4lr\" (UniqueName: \"kubernetes.io/projected/7215c3c5-9746-4192-b018-0c31b42cee4d-kube-api-access-9b4lr\") pod \"nova-scheduler-0\" (UID: \"7215c3c5-9746-4192-b018-0c31b42cee4d\") " pod="openstack/nova-scheduler-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.787400 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 17:34:19 crc kubenswrapper[4858]: I0202 17:34:19.801935 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 17:34:20 crc kubenswrapper[4858]: I0202 17:34:20.305887 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 17:34:20 crc kubenswrapper[4858]: W0202 17:34:20.308352 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5402e6ff_48ec_47b2_b68e_3385e51ec388.slice/crio-52d79b50ee34e258ef8cad05393f171f59065126a406e061414b67e0b856b891 WatchSource:0}: Error finding container 52d79b50ee34e258ef8cad05393f171f59065126a406e061414b67e0b856b891: Status 404 returned error can't find the container with id 52d79b50ee34e258ef8cad05393f171f59065126a406e061414b67e0b856b891 Feb 02 17:34:20 crc kubenswrapper[4858]: I0202 17:34:20.383141 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 17:34:20 crc kubenswrapper[4858]: I0202 17:34:20.415636 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53aea7b5-fb90-427a-b245-2c49de3d9ca4" path="/var/lib/kubelet/pods/53aea7b5-fb90-427a-b245-2c49de3d9ca4/volumes" Feb 02 17:34:20 crc kubenswrapper[4858]: I0202 17:34:20.416616 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e199b0-71cf-47cb-bca9-bf04bbea5785" path="/var/lib/kubelet/pods/79e199b0-71cf-47cb-bca9-bf04bbea5785/volumes" Feb 02 17:34:21 crc kubenswrapper[4858]: I0202 17:34:21.348083 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5402e6ff-48ec-47b2-b68e-3385e51ec388","Type":"ContainerStarted","Data":"b85b8d8fd8041fe1d8b4b204bb0601d2748db050f4b39ec10e7c25153ef680d4"} Feb 02 17:34:21 crc kubenswrapper[4858]: I0202 17:34:21.348436 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5402e6ff-48ec-47b2-b68e-3385e51ec388","Type":"ContainerStarted","Data":"2469e7be4ce20149edf2643bdcc18259c2914bb4599cf7e60577a1fcbf6a11f0"} Feb 02 17:34:21 crc kubenswrapper[4858]: I0202 17:34:21.348450 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5402e6ff-48ec-47b2-b68e-3385e51ec388","Type":"ContainerStarted","Data":"52d79b50ee34e258ef8cad05393f171f59065126a406e061414b67e0b856b891"} Feb 02 17:34:21 crc kubenswrapper[4858]: I0202 17:34:21.355672 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7215c3c5-9746-4192-b018-0c31b42cee4d","Type":"ContainerStarted","Data":"7e578312666f973cb20a48572fb9a0935bc5f4755d5cb417854416d9ae167ee6"} Feb 02 17:34:21 crc kubenswrapper[4858]: I0202 17:34:21.355727 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"7215c3c5-9746-4192-b018-0c31b42cee4d","Type":"ContainerStarted","Data":"8ace9c8f0d4813a054fda316af23392e1e014d9b613136a4a0202835580972ec"} Feb 02 17:34:21 crc kubenswrapper[4858]: I0202 17:34:21.402716 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.402695734 podStartE2EDuration="2.402695734s" podCreationTimestamp="2026-02-02 17:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:34:21.394506052 +0000 UTC m=+1162.546921327" watchObservedRunningTime="2026-02-02 17:34:21.402695734 +0000 UTC m=+1162.555110989" Feb 02 17:34:21 crc kubenswrapper[4858]: I0202 17:34:21.428963 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.428943546 podStartE2EDuration="2.428943546s" podCreationTimestamp="2026-02-02 17:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:34:21.423489542 +0000 UTC m=+1162.575904827" watchObservedRunningTime="2026-02-02 17:34:21.428943546 +0000 UTC m=+1162.581358811" Feb 02 17:34:21 crc kubenswrapper[4858]: I0202 17:34:21.540910 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3ab13622-4d61-4621-b865-45ede50fcaeb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": read tcp 10.217.0.2:51154->10.217.0.199:8775: read: connection reset by peer" Feb 02 17:34:21 crc kubenswrapper[4858]: I0202 17:34:21.541014 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3ab13622-4d61-4621-b865-45ede50fcaeb" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": read tcp 10.217.0.2:51156->10.217.0.199:8775: read: connection reset by peer" Feb 02 17:34:21 crc kubenswrapper[4858]: I0202 17:34:21.995295 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.091347 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ab13622-4d61-4621-b865-45ede50fcaeb-logs\") pod \"3ab13622-4d61-4621-b865-45ede50fcaeb\" (UID: \"3ab13622-4d61-4621-b865-45ede50fcaeb\") " Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.091416 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab13622-4d61-4621-b865-45ede50fcaeb-combined-ca-bundle\") pod \"3ab13622-4d61-4621-b865-45ede50fcaeb\" (UID: \"3ab13622-4d61-4621-b865-45ede50fcaeb\") " Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.091448 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab13622-4d61-4621-b865-45ede50fcaeb-config-data\") pod \"3ab13622-4d61-4621-b865-45ede50fcaeb\" (UID: \"3ab13622-4d61-4621-b865-45ede50fcaeb\") " Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.091495 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab13622-4d61-4621-b865-45ede50fcaeb-nova-metadata-tls-certs\") pod \"3ab13622-4d61-4621-b865-45ede50fcaeb\" (UID: \"3ab13622-4d61-4621-b865-45ede50fcaeb\") " Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.091518 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpgdq\" (UniqueName: \"kubernetes.io/projected/3ab13622-4d61-4621-b865-45ede50fcaeb-kube-api-access-qpgdq\") pod \"3ab13622-4d61-4621-b865-45ede50fcaeb\" (UID: \"3ab13622-4d61-4621-b865-45ede50fcaeb\") " Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.091790 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ab13622-4d61-4621-b865-45ede50fcaeb-logs" (OuterVolumeSpecName: "logs") pod "3ab13622-4d61-4621-b865-45ede50fcaeb" (UID: "3ab13622-4d61-4621-b865-45ede50fcaeb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.115329 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab13622-4d61-4621-b865-45ede50fcaeb-kube-api-access-qpgdq" (OuterVolumeSpecName: "kube-api-access-qpgdq") pod "3ab13622-4d61-4621-b865-45ede50fcaeb" (UID: "3ab13622-4d61-4621-b865-45ede50fcaeb"). InnerVolumeSpecName "kube-api-access-qpgdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.139773 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab13622-4d61-4621-b865-45ede50fcaeb-config-data" (OuterVolumeSpecName: "config-data") pod "3ab13622-4d61-4621-b865-45ede50fcaeb" (UID: "3ab13622-4d61-4621-b865-45ede50fcaeb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.141177 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab13622-4d61-4621-b865-45ede50fcaeb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ab13622-4d61-4621-b865-45ede50fcaeb" (UID: "3ab13622-4d61-4621-b865-45ede50fcaeb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.149172 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab13622-4d61-4621-b865-45ede50fcaeb-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3ab13622-4d61-4621-b865-45ede50fcaeb" (UID: "3ab13622-4d61-4621-b865-45ede50fcaeb"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.193604 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab13622-4d61-4621-b865-45ede50fcaeb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.193642 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab13622-4d61-4621-b865-45ede50fcaeb-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.193657 4858 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab13622-4d61-4621-b865-45ede50fcaeb-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.193670 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpgdq\" (UniqueName: \"kubernetes.io/projected/3ab13622-4d61-4621-b865-45ede50fcaeb-kube-api-access-qpgdq\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.193678 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ab13622-4d61-4621-b865-45ede50fcaeb-logs\") on node \"crc\" DevicePath \"\"" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.371296 4858 generic.go:334] "Generic (PLEG): container finished" podID="3ab13622-4d61-4621-b865-45ede50fcaeb" containerID="448bcbb4084d67ca18b597fcaa91e184905eb4a03ebf3a5ae6a82dd3252c3092" exitCode=0 Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.371367 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ab13622-4d61-4621-b865-45ede50fcaeb","Type":"ContainerDied","Data":"448bcbb4084d67ca18b597fcaa91e184905eb4a03ebf3a5ae6a82dd3252c3092"} Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.371935 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ab13622-4d61-4621-b865-45ede50fcaeb","Type":"ContainerDied","Data":"17f784dd9e255dab5d38857ab55d0ebc99ba6b4b8c363401a3ef7f50999758bc"} Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.371371 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.372003 4858 scope.go:117] "RemoveContainer" containerID="448bcbb4084d67ca18b597fcaa91e184905eb4a03ebf3a5ae6a82dd3252c3092" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.435800 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.436561 4858 scope.go:117] "RemoveContainer" containerID="a45ca9c3436d4e9a318561a0f4518b985cf1546a2c8e186de80f2e05bab94b59" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.447608 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.464960 4858 scope.go:117] "RemoveContainer" containerID="448bcbb4084d67ca18b597fcaa91e184905eb4a03ebf3a5ae6a82dd3252c3092" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.465429 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 17:34:22 crc kubenswrapper[4858]: E0202 17:34:22.465481 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"448bcbb4084d67ca18b597fcaa91e184905eb4a03ebf3a5ae6a82dd3252c3092\": container with ID starting with 448bcbb4084d67ca18b597fcaa91e184905eb4a03ebf3a5ae6a82dd3252c3092 not found: ID does not exist" containerID="448bcbb4084d67ca18b597fcaa91e184905eb4a03ebf3a5ae6a82dd3252c3092" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.465528 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"448bcbb4084d67ca18b597fcaa91e184905eb4a03ebf3a5ae6a82dd3252c3092"} err="failed to get container status \"448bcbb4084d67ca18b597fcaa91e184905eb4a03ebf3a5ae6a82dd3252c3092\": rpc error: code = NotFound desc = could not find container \"448bcbb4084d67ca18b597fcaa91e184905eb4a03ebf3a5ae6a82dd3252c3092\": container with ID starting with 448bcbb4084d67ca18b597fcaa91e184905eb4a03ebf3a5ae6a82dd3252c3092 not found: ID does not exist" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.465562 4858 scope.go:117] "RemoveContainer" containerID="a45ca9c3436d4e9a318561a0f4518b985cf1546a2c8e186de80f2e05bab94b59" Feb 02 17:34:22 crc kubenswrapper[4858]: E0202 17:34:22.466266 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a45ca9c3436d4e9a318561a0f4518b985cf1546a2c8e186de80f2e05bab94b59\": container with ID starting with a45ca9c3436d4e9a318561a0f4518b985cf1546a2c8e186de80f2e05bab94b59 not found: ID does not exist" containerID="a45ca9c3436d4e9a318561a0f4518b985cf1546a2c8e186de80f2e05bab94b59" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.466320 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a45ca9c3436d4e9a318561a0f4518b985cf1546a2c8e186de80f2e05bab94b59"} err="failed to get container status \"a45ca9c3436d4e9a318561a0f4518b985cf1546a2c8e186de80f2e05bab94b59\": rpc error: code = NotFound desc = could not find container \"a45ca9c3436d4e9a318561a0f4518b985cf1546a2c8e186de80f2e05bab94b59\": container with ID starting with a45ca9c3436d4e9a318561a0f4518b985cf1546a2c8e186de80f2e05bab94b59 not found: ID does not exist" Feb 02 17:34:22 crc kubenswrapper[4858]: E0202 17:34:22.468668 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab13622-4d61-4621-b865-45ede50fcaeb" containerName="nova-metadata-log" Feb 02 
17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.468701 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab13622-4d61-4621-b865-45ede50fcaeb" containerName="nova-metadata-log" Feb 02 17:34:22 crc kubenswrapper[4858]: E0202 17:34:22.468746 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab13622-4d61-4621-b865-45ede50fcaeb" containerName="nova-metadata-metadata" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.468755 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab13622-4d61-4621-b865-45ede50fcaeb" containerName="nova-metadata-metadata" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.468996 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab13622-4d61-4621-b865-45ede50fcaeb" containerName="nova-metadata-log" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.469025 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab13622-4d61-4621-b865-45ede50fcaeb" containerName="nova-metadata-metadata" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.470294 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.473930 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.474226 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.484171 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.600237 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ad277d-ba1e-4129-b696-f4fa1a598d72-logs\") pod \"nova-metadata-0\" (UID: \"52ad277d-ba1e-4129-b696-f4fa1a598d72\") " pod="openstack/nova-metadata-0" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.600302 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/52ad277d-ba1e-4129-b696-f4fa1a598d72-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"52ad277d-ba1e-4129-b696-f4fa1a598d72\") " pod="openstack/nova-metadata-0" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.600518 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ad277d-ba1e-4129-b696-f4fa1a598d72-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"52ad277d-ba1e-4129-b696-f4fa1a598d72\") " pod="openstack/nova-metadata-0" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.600608 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5p8v\" (UniqueName: \"kubernetes.io/projected/52ad277d-ba1e-4129-b696-f4fa1a598d72-kube-api-access-r5p8v\") pod \"nova-metadata-0\" (UID: \"52ad277d-ba1e-4129-b696-f4fa1a598d72\") " pod="openstack/nova-metadata-0" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.600650 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ad277d-ba1e-4129-b696-f4fa1a598d72-config-data\") pod \"nova-metadata-0\" (UID: 
\"52ad277d-ba1e-4129-b696-f4fa1a598d72\") " pod="openstack/nova-metadata-0" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.702661 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ad277d-ba1e-4129-b696-f4fa1a598d72-config-data\") pod \"nova-metadata-0\" (UID: \"52ad277d-ba1e-4129-b696-f4fa1a598d72\") " pod="openstack/nova-metadata-0" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.702774 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ad277d-ba1e-4129-b696-f4fa1a598d72-logs\") pod \"nova-metadata-0\" (UID: \"52ad277d-ba1e-4129-b696-f4fa1a598d72\") " pod="openstack/nova-metadata-0" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.702812 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/52ad277d-ba1e-4129-b696-f4fa1a598d72-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"52ad277d-ba1e-4129-b696-f4fa1a598d72\") " pod="openstack/nova-metadata-0" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.702874 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ad277d-ba1e-4129-b696-f4fa1a598d72-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"52ad277d-ba1e-4129-b696-f4fa1a598d72\") " pod="openstack/nova-metadata-0" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.702913 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5p8v\" (UniqueName: \"kubernetes.io/projected/52ad277d-ba1e-4129-b696-f4fa1a598d72-kube-api-access-r5p8v\") pod \"nova-metadata-0\" (UID: \"52ad277d-ba1e-4129-b696-f4fa1a598d72\") " pod="openstack/nova-metadata-0" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.703305 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ad277d-ba1e-4129-b696-f4fa1a598d72-logs\") pod \"nova-metadata-0\" (UID: \"52ad277d-ba1e-4129-b696-f4fa1a598d72\") " pod="openstack/nova-metadata-0" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.708445 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ad277d-ba1e-4129-b696-f4fa1a598d72-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"52ad277d-ba1e-4129-b696-f4fa1a598d72\") " pod="openstack/nova-metadata-0" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.708961 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ad277d-ba1e-4129-b696-f4fa1a598d72-config-data\") pod \"nova-metadata-0\" (UID: \"52ad277d-ba1e-4129-b696-f4fa1a598d72\") " pod="openstack/nova-metadata-0" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.713486 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/52ad277d-ba1e-4129-b696-f4fa1a598d72-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"52ad277d-ba1e-4129-b696-f4fa1a598d72\") " pod="openstack/nova-metadata-0" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.737475 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5p8v\" (UniqueName: 
\"kubernetes.io/projected/52ad277d-ba1e-4129-b696-f4fa1a598d72-kube-api-access-r5p8v\") pod \"nova-metadata-0\" (UID: \"52ad277d-ba1e-4129-b696-f4fa1a598d72\") " pod="openstack/nova-metadata-0" Feb 02 17:34:22 crc kubenswrapper[4858]: I0202 17:34:22.796931 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 17:34:23 crc kubenswrapper[4858]: I0202 17:34:23.297280 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 17:34:23 crc kubenswrapper[4858]: I0202 17:34:23.382365 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"52ad277d-ba1e-4129-b696-f4fa1a598d72","Type":"ContainerStarted","Data":"7ec285008dc039884631d17c40756c9375833f40a9f5121c6e501ad4861f1a8d"} Feb 02 17:34:24 crc kubenswrapper[4858]: I0202 17:34:24.429134 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab13622-4d61-4621-b865-45ede50fcaeb" path="/var/lib/kubelet/pods/3ab13622-4d61-4621-b865-45ede50fcaeb/volumes" Feb 02 17:34:24 crc kubenswrapper[4858]: I0202 17:34:24.432858 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"52ad277d-ba1e-4129-b696-f4fa1a598d72","Type":"ContainerStarted","Data":"4f9b37f735ec74ae838275a293d8ef9959dd1f74378c09be194d4028602243f5"} Feb 02 17:34:24 crc kubenswrapper[4858]: I0202 17:34:24.433296 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"52ad277d-ba1e-4129-b696-f4fa1a598d72","Type":"ContainerStarted","Data":"d19b8b02385749b7007061b7fa0e1d8e16facc771626ee0d56900549f2423c8c"} Feb 02 17:34:24 crc kubenswrapper[4858]: I0202 17:34:24.444178 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.444151197 podStartE2EDuration="2.444151197s" podCreationTimestamp="2026-02-02 17:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:34:24.425224261 +0000 UTC m=+1165.577639596" watchObservedRunningTime="2026-02-02 17:34:24.444151197 +0000 UTC m=+1165.596566492" Feb 02 17:34:24 crc kubenswrapper[4858]: I0202 17:34:24.803069 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 17:34:27 crc kubenswrapper[4858]: I0202 17:34:27.798242 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 17:34:27 crc kubenswrapper[4858]: I0202 17:34:27.798730 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 17:34:29 crc kubenswrapper[4858]: I0202 17:34:29.787822 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 17:34:29 crc kubenswrapper[4858]: I0202 17:34:29.788198 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 17:34:29 crc kubenswrapper[4858]: I0202 17:34:29.802689 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 17:34:29 crc kubenswrapper[4858]: I0202 17:34:29.829780 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 17:34:30 crc kubenswrapper[4858]: I0202 17:34:30.509357 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-scheduler-0" Feb 02 17:34:30 crc kubenswrapper[4858]: I0202 17:34:30.799290 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5402e6ff-48ec-47b2-b68e-3385e51ec388" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 17:34:30 crc kubenswrapper[4858]: I0202 17:34:30.799290 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5402e6ff-48ec-47b2-b68e-3385e51ec388" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 17:34:32 crc kubenswrapper[4858]: I0202 17:34:32.798187 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 17:34:32 crc kubenswrapper[4858]: I0202 17:34:32.798235 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 17:34:33 crc kubenswrapper[4858]: I0202 17:34:33.812243 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="52ad277d-ba1e-4129-b696-f4fa1a598d72" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.211:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 17:34:33 crc kubenswrapper[4858]: I0202 17:34:33.812251 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="52ad277d-ba1e-4129-b696-f4fa1a598d72" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.211:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 17:34:37 crc kubenswrapper[4858]: I0202 17:34:37.871303 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 17:34:39 crc kubenswrapper[4858]: I0202 17:34:39.798897 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 17:34:39 crc kubenswrapper[4858]: I0202 17:34:39.799554 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 17:34:39 crc kubenswrapper[4858]: I0202 17:34:39.809647 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 17:34:39 crc kubenswrapper[4858]: I0202 17:34:39.814573 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 17:34:40 crc kubenswrapper[4858]: I0202 17:34:40.561345 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 17:34:40 crc kubenswrapper[4858]: I0202 17:34:40.569532 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 17:34:42 crc kubenswrapper[4858]: I0202 17:34:42.803345 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 17:34:42 crc kubenswrapper[4858]: I0202 17:34:42.806091 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 17:34:42 crc kubenswrapper[4858]: I0202 17:34:42.809312 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 17:34:43 crc kubenswrapper[4858]: I0202 
17:34:43.596202 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 17:34:51 crc kubenswrapper[4858]: I0202 17:34:51.045463 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 17:34:51 crc kubenswrapper[4858]: I0202 17:34:51.798191 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 17:34:54 crc kubenswrapper[4858]: I0202 17:34:54.827435 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="55d221f1-91f9-4045-b94b-95facb25b3dc" containerName="rabbitmq" containerID="cri-o://c0b46dd7a0e07204197d3abee37af0a73dbe003a0faade6651a8184776b0b279" gracePeriod=604797 Feb 02 17:34:55 crc kubenswrapper[4858]: I0202 17:34:55.706042 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="0bbd4c99-7b4c-4bb5-833d-aa638125ad9e" containerName="rabbitmq" containerID="cri-o://cb7403de122bf4f3502af410af10c251740d4ed4c1edde9c052461140e977e05" gracePeriod=604797 Feb 02 17:34:56 crc kubenswrapper[4858]: I0202 17:34:56.771701 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="55d221f1-91f9-4045-b94b-95facb25b3dc" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.93:5671: connect: connection refused" Feb 02 17:34:57 crc kubenswrapper[4858]: I0202 17:34:57.171959 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="0bbd4c99-7b4c-4bb5-833d-aa638125ad9e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.94:5671: connect: connection refused" Feb 02 17:34:57 crc kubenswrapper[4858]: I0202 17:34:57.808297 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:34:57 crc kubenswrapper[4858]: I0202 17:34:57.808697 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.434847 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.499156 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/55d221f1-91f9-4045-b94b-95facb25b3dc-erlang-cookie-secret\") pod \"55d221f1-91f9-4045-b94b-95facb25b3dc\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.499246 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/55d221f1-91f9-4045-b94b-95facb25b3dc-server-conf\") pod \"55d221f1-91f9-4045-b94b-95facb25b3dc\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.499312 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-tls\") pod \"55d221f1-91f9-4045-b94b-95facb25b3dc\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.499337 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55d221f1-91f9-4045-b94b-95facb25b3dc-config-data\") pod \"55d221f1-91f9-4045-b94b-95facb25b3dc\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.499363 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dcjm\" (UniqueName: \"kubernetes.io/projected/55d221f1-91f9-4045-b94b-95facb25b3dc-kube-api-access-6dcjm\") pod \"55d221f1-91f9-4045-b94b-95facb25b3dc\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.499413 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"55d221f1-91f9-4045-b94b-95facb25b3dc\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.499447 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/55d221f1-91f9-4045-b94b-95facb25b3dc-pod-info\") pod \"55d221f1-91f9-4045-b94b-95facb25b3dc\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.499478 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/55d221f1-91f9-4045-b94b-95facb25b3dc-plugins-conf\") pod \"55d221f1-91f9-4045-b94b-95facb25b3dc\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.499517 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-confd\") pod \"55d221f1-91f9-4045-b94b-95facb25b3dc\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.499553 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-plugins\") pod \"55d221f1-91f9-4045-b94b-95facb25b3dc\" (UID: 
\"55d221f1-91f9-4045-b94b-95facb25b3dc\") " Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.499590 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-erlang-cookie\") pod \"55d221f1-91f9-4045-b94b-95facb25b3dc\" (UID: \"55d221f1-91f9-4045-b94b-95facb25b3dc\") " Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.501598 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "55d221f1-91f9-4045-b94b-95facb25b3dc" (UID: "55d221f1-91f9-4045-b94b-95facb25b3dc"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.502106 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55d221f1-91f9-4045-b94b-95facb25b3dc-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "55d221f1-91f9-4045-b94b-95facb25b3dc" (UID: "55d221f1-91f9-4045-b94b-95facb25b3dc"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.505841 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "55d221f1-91f9-4045-b94b-95facb25b3dc" (UID: "55d221f1-91f9-4045-b94b-95facb25b3dc"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.508541 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d221f1-91f9-4045-b94b-95facb25b3dc-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "55d221f1-91f9-4045-b94b-95facb25b3dc" (UID: "55d221f1-91f9-4045-b94b-95facb25b3dc"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.512138 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55d221f1-91f9-4045-b94b-95facb25b3dc-kube-api-access-6dcjm" (OuterVolumeSpecName: "kube-api-access-6dcjm") pod "55d221f1-91f9-4045-b94b-95facb25b3dc" (UID: "55d221f1-91f9-4045-b94b-95facb25b3dc"). InnerVolumeSpecName "kube-api-access-6dcjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.517863 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/55d221f1-91f9-4045-b94b-95facb25b3dc-pod-info" (OuterVolumeSpecName: "pod-info") pod "55d221f1-91f9-4045-b94b-95facb25b3dc" (UID: "55d221f1-91f9-4045-b94b-95facb25b3dc"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.525174 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "55d221f1-91f9-4045-b94b-95facb25b3dc" (UID: "55d221f1-91f9-4045-b94b-95facb25b3dc"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.536267 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "55d221f1-91f9-4045-b94b-95facb25b3dc" (UID: "55d221f1-91f9-4045-b94b-95facb25b3dc"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.578192 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55d221f1-91f9-4045-b94b-95facb25b3dc-config-data" (OuterVolumeSpecName: "config-data") pod "55d221f1-91f9-4045-b94b-95facb25b3dc" (UID: "55d221f1-91f9-4045-b94b-95facb25b3dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.601701 4858 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/55d221f1-91f9-4045-b94b-95facb25b3dc-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.601735 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.601749 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55d221f1-91f9-4045-b94b-95facb25b3dc-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.601760 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dcjm\" (UniqueName: \"kubernetes.io/projected/55d221f1-91f9-4045-b94b-95facb25b3dc-kube-api-access-6dcjm\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.601786 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.601798 4858 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/55d221f1-91f9-4045-b94b-95facb25b3dc-pod-info\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.601809 4858 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/55d221f1-91f9-4045-b94b-95facb25b3dc-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.601821 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.601833 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.608656 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/55d221f1-91f9-4045-b94b-95facb25b3dc-server-conf" (OuterVolumeSpecName: "server-conf") pod "55d221f1-91f9-4045-b94b-95facb25b3dc" (UID: "55d221f1-91f9-4045-b94b-95facb25b3dc"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.628248 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "55d221f1-91f9-4045-b94b-95facb25b3dc" (UID: "55d221f1-91f9-4045-b94b-95facb25b3dc"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.631587 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.703712 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/55d221f1-91f9-4045-b94b-95facb25b3dc-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.703744 4858 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/55d221f1-91f9-4045-b94b-95facb25b3dc-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.703755 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.762823 4858 generic.go:334] "Generic (PLEG): container finished" podID="55d221f1-91f9-4045-b94b-95facb25b3dc" containerID="c0b46dd7a0e07204197d3abee37af0a73dbe003a0faade6651a8184776b0b279" exitCode=0 Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.762910 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"55d221f1-91f9-4045-b94b-95facb25b3dc","Type":"ContainerDied","Data":"c0b46dd7a0e07204197d3abee37af0a73dbe003a0faade6651a8184776b0b279"} Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.762936 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"55d221f1-91f9-4045-b94b-95facb25b3dc","Type":"ContainerDied","Data":"f9d449e4bd13494166da1ff6f05c9980f3128b5c4c438d50e12b7d93d55bade3"} Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.762954 4858 scope.go:117] "RemoveContainer" containerID="c0b46dd7a0e07204197d3abee37af0a73dbe003a0faade6651a8184776b0b279" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.763126 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.795967 4858 scope.go:117] "RemoveContainer" containerID="7aadaa269dc736d732bdf76758c3351ee91ef7f1b2b6fb59c37adeb68100c533" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.819183 4858 scope.go:117] "RemoveContainer" containerID="c0b46dd7a0e07204197d3abee37af0a73dbe003a0faade6651a8184776b0b279" Feb 02 17:35:01 crc kubenswrapper[4858]: E0202 17:35:01.819603 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0b46dd7a0e07204197d3abee37af0a73dbe003a0faade6651a8184776b0b279\": container with ID starting with c0b46dd7a0e07204197d3abee37af0a73dbe003a0faade6651a8184776b0b279 not found: ID does not exist" containerID="c0b46dd7a0e07204197d3abee37af0a73dbe003a0faade6651a8184776b0b279" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.819631 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0b46dd7a0e07204197d3abee37af0a73dbe003a0faade6651a8184776b0b279"} err="failed to get container status \"c0b46dd7a0e07204197d3abee37af0a73dbe003a0faade6651a8184776b0b279\": rpc error: code = NotFound desc = could not find container \"c0b46dd7a0e07204197d3abee37af0a73dbe003a0faade6651a8184776b0b279\": container with ID starting with c0b46dd7a0e07204197d3abee37af0a73dbe003a0faade6651a8184776b0b279 not found: ID does not exist" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.819656 4858 scope.go:117] "RemoveContainer" containerID="7aadaa269dc736d732bdf76758c3351ee91ef7f1b2b6fb59c37adeb68100c533" Feb 02 17:35:01 crc kubenswrapper[4858]: E0202 17:35:01.819828 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aadaa269dc736d732bdf76758c3351ee91ef7f1b2b6fb59c37adeb68100c533\": container with ID starting with 7aadaa269dc736d732bdf76758c3351ee91ef7f1b2b6fb59c37adeb68100c533 not found: ID does not exist" containerID="7aadaa269dc736d732bdf76758c3351ee91ef7f1b2b6fb59c37adeb68100c533" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.819855 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aadaa269dc736d732bdf76758c3351ee91ef7f1b2b6fb59c37adeb68100c533"} err="failed to get container status \"7aadaa269dc736d732bdf76758c3351ee91ef7f1b2b6fb59c37adeb68100c533\": rpc error: code = NotFound desc = could not find container \"7aadaa269dc736d732bdf76758c3351ee91ef7f1b2b6fb59c37adeb68100c533\": container with ID starting with 7aadaa269dc736d732bdf76758c3351ee91ef7f1b2b6fb59c37adeb68100c533 not found: ID does not exist" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.819918 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.825564 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.853876 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 17:35:01 crc kubenswrapper[4858]: E0202 17:35:01.854271 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55d221f1-91f9-4045-b94b-95facb25b3dc" containerName="rabbitmq" Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.854287 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="55d221f1-91f9-4045-b94b-95facb25b3dc" containerName="rabbitmq" Feb 
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.854318 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="55d221f1-91f9-4045-b94b-95facb25b3dc" containerName="setup-container"
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.854487 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="55d221f1-91f9-4045-b94b-95facb25b3dc" containerName="rabbitmq"
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.855427 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.860191 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.860436 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.860708 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.861050 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.861202 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.861395 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.861556 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-wjk9v"
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.871915 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.908642 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.908725 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.908767 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.908799 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.908832 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-config-data\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.908868 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.908901 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.908926 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.908942 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd6zf\" (UniqueName: \"kubernetes.io/projected/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-kube-api-access-qd6zf\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.909010 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:01 crc kubenswrapper[4858]: I0202 17:35:01.909030 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.010373 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.010436 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-config-data\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.010462 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.010491 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.010517 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.010534 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd6zf\" (UniqueName: \"kubernetes.io/projected/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-kube-api-access-qd6zf\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.010575 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.010591 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.010650 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.010671 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.010691 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.011450 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.011622 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.012647 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-config-data\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.013519 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.016061 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.016209 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.018416 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.018632 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.018758 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.019296 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.039661 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd6zf\" (UniqueName: \"kubernetes.io/projected/f470a8b9-224f-436f-bbbb-c6ab6b1f587e-kube-api-access-qd6zf\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.055419 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f470a8b9-224f-436f-bbbb-c6ab6b1f587e\") " pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.274037 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.359840 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-bsc4m"]
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.361865 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.363611 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.377151 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-bsc4m"]
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.408727 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.413705 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55d221f1-91f9-4045-b94b-95facb25b3dc" path="/var/lib/kubelet/pods/55d221f1-91f9-4045-b94b-95facb25b3dc/volumes"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.420097 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjzkt\" (UniqueName: \"kubernetes.io/projected/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-kube-api-access-vjzkt\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.420413 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-config\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.420442 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.420465 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m"
Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.420498 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.420583 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.420611 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.522105 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hkj9\" (UniqueName: \"kubernetes.io/projected/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-kube-api-access-2hkj9\") pod \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.522187 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-server-conf\") pod \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.522250 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-erlang-cookie\") pod \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.523112 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e" (UID: "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.524350 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.524418 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-tls\") pod \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.524443 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-config-data\") pod \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.524484 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-pod-info\") pod \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.524523 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-erlang-cookie-secret\") pod \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.524570 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-plugins\") pod \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.524647 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-plugins-conf\") pod \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.524722 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-confd\") pod \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\" (UID: \"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e\") " Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.524927 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.524981 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: 
\"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.525100 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjzkt\" (UniqueName: \"kubernetes.io/projected/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-kube-api-access-vjzkt\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.525167 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-config\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.525188 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.525223 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.525271 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.525362 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.525422 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e" (UID: "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.526630 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.527161 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.527478 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.528407 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e" (UID: "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.528652 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.528652 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-config\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.530830 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e" (UID: "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.532474 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.539026 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-kube-api-access-2hkj9" (OuterVolumeSpecName: "kube-api-access-2hkj9") pod "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e" (UID: "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e"). InnerVolumeSpecName "kube-api-access-2hkj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.539093 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e" (UID: "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.539214 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e" (UID: "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.543171 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-pod-info" (OuterVolumeSpecName: "pod-info") pod "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e" (UID: "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.555992 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjzkt\" (UniqueName: \"kubernetes.io/projected/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-kube-api-access-vjzkt\") pod \"dnsmasq-dns-79bd4cc8c9-bsc4m\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.577573 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-config-data" (OuterVolumeSpecName: "config-data") pod "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e" (UID: "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.587679 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-server-conf" (OuterVolumeSpecName: "server-conf") pod "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e" (UID: "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.629458 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hkj9\" (UniqueName: \"kubernetes.io/projected/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-kube-api-access-2hkj9\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.629493 4858 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.629527 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.629537 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.629546 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.629554 4858 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-pod-info\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.629562 4858 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.629571 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.629579 4858 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.644865 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e" (UID: "0bbd4c99-7b4c-4bb5-833d-aa638125ad9e"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.656201 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.728082 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.731627 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.731775 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.775118 4858 generic.go:334] "Generic (PLEG): container finished" podID="0bbd4c99-7b4c-4bb5-833d-aa638125ad9e" containerID="cb7403de122bf4f3502af410af10c251740d4ed4c1edde9c052461140e977e05" exitCode=0 Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.775183 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.775210 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e","Type":"ContainerDied","Data":"cb7403de122bf4f3502af410af10c251740d4ed4c1edde9c052461140e977e05"} Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.775827 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0bbd4c99-7b4c-4bb5-833d-aa638125ad9e","Type":"ContainerDied","Data":"9474f2ccb29f07399c2e7b8bfe8174b364bd7677af0b818865cdde02a0a376a5"} Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.775874 4858 scope.go:117] "RemoveContainer" containerID="cb7403de122bf4f3502af410af10c251740d4ed4c1edde9c052461140e977e05" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.814460 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.816173 4858 scope.go:117] "RemoveContainer" containerID="ac808642e58db1d8622822bc03c9ea33e8538cb4f029c9db6a85997ead44db3c" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.826289 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 17:35:02 crc kubenswrapper[4858]: W0202 17:35:02.827124 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf470a8b9_224f_436f_bbbb_c6ab6b1f587e.slice/crio-b164f5bcb75cc95fa59d966c43e435aed56a47b5d2a81087673c0f542e47d87f WatchSource:0}: Error finding container b164f5bcb75cc95fa59d966c43e435aed56a47b5d2a81087673c0f542e47d87f: Status 404 returned error can't find the container with id b164f5bcb75cc95fa59d966c43e435aed56a47b5d2a81087673c0f542e47d87f Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.859087 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.870772 4858 scope.go:117] "RemoveContainer" containerID="cb7403de122bf4f3502af410af10c251740d4ed4c1edde9c052461140e977e05" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.875336 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 17:35:02 crc kubenswrapper[4858]: E0202 17:35:02.875759 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bbd4c99-7b4c-4bb5-833d-aa638125ad9e" containerName="setup-container" Feb 02 
17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.875772 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bbd4c99-7b4c-4bb5-833d-aa638125ad9e" containerName="setup-container" Feb 02 17:35:02 crc kubenswrapper[4858]: E0202 17:35:02.875791 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bbd4c99-7b4c-4bb5-833d-aa638125ad9e" containerName="rabbitmq" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.875796 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bbd4c99-7b4c-4bb5-833d-aa638125ad9e" containerName="rabbitmq" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.875988 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bbd4c99-7b4c-4bb5-833d-aa638125ad9e" containerName="rabbitmq" Feb 02 17:35:02 crc kubenswrapper[4858]: E0202 17:35:02.876170 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb7403de122bf4f3502af410af10c251740d4ed4c1edde9c052461140e977e05\": container with ID starting with cb7403de122bf4f3502af410af10c251740d4ed4c1edde9c052461140e977e05 not found: ID does not exist" containerID="cb7403de122bf4f3502af410af10c251740d4ed4c1edde9c052461140e977e05" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.876240 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb7403de122bf4f3502af410af10c251740d4ed4c1edde9c052461140e977e05"} err="failed to get container status \"cb7403de122bf4f3502af410af10c251740d4ed4c1edde9c052461140e977e05\": rpc error: code = NotFound desc = could not find container \"cb7403de122bf4f3502af410af10c251740d4ed4c1edde9c052461140e977e05\": container with ID starting with cb7403de122bf4f3502af410af10c251740d4ed4c1edde9c052461140e977e05 not found: ID does not exist" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.876270 4858 scope.go:117] "RemoveContainer" containerID="ac808642e58db1d8622822bc03c9ea33e8538cb4f029c9db6a85997ead44db3c" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.876942 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:02 crc kubenswrapper[4858]: E0202 17:35:02.878604 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac808642e58db1d8622822bc03c9ea33e8538cb4f029c9db6a85997ead44db3c\": container with ID starting with ac808642e58db1d8622822bc03c9ea33e8538cb4f029c9db6a85997ead44db3c not found: ID does not exist" containerID="ac808642e58db1d8622822bc03c9ea33e8538cb4f029c9db6a85997ead44db3c" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.878629 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac808642e58db1d8622822bc03c9ea33e8538cb4f029c9db6a85997ead44db3c"} err="failed to get container status \"ac808642e58db1d8622822bc03c9ea33e8538cb4f029c9db6a85997ead44db3c\": rpc error: code = NotFound desc = could not find container \"ac808642e58db1d8622822bc03c9ea33e8538cb4f029c9db6a85997ead44db3c\": container with ID starting with ac808642e58db1d8622822bc03c9ea33e8538cb4f029c9db6a85997ead44db3c not found: ID does not exist" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.879559 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.879758 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.879942 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.880089 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.880221 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.881248 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dcr6q" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.881414 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.889031 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.935239 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/09b56fe4-3166-4448-a186-95f3c74199f1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.935308 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/09b56fe4-3166-4448-a186-95f3c74199f1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.935368 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/09b56fe4-3166-4448-a186-95f3c74199f1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.935425 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.935450 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/09b56fe4-3166-4448-a186-95f3c74199f1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.935671 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/09b56fe4-3166-4448-a186-95f3c74199f1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.935698 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/09b56fe4-3166-4448-a186-95f3c74199f1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.935721 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/09b56fe4-3166-4448-a186-95f3c74199f1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.935761 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz4zb\" (UniqueName: \"kubernetes.io/projected/09b56fe4-3166-4448-a186-95f3c74199f1-kube-api-access-sz4zb\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.935796 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/09b56fe4-3166-4448-a186-95f3c74199f1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:02 crc kubenswrapper[4858]: I0202 17:35:02.935827 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/09b56fe4-3166-4448-a186-95f3c74199f1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.064311 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/09b56fe4-3166-4448-a186-95f3c74199f1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.064367 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/09b56fe4-3166-4448-a186-95f3c74199f1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.064391 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/09b56fe4-3166-4448-a186-95f3c74199f1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.064429 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz4zb\" (UniqueName: \"kubernetes.io/projected/09b56fe4-3166-4448-a186-95f3c74199f1-kube-api-access-sz4zb\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.064476 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/09b56fe4-3166-4448-a186-95f3c74199f1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.064517 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/09b56fe4-3166-4448-a186-95f3c74199f1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.064573 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/09b56fe4-3166-4448-a186-95f3c74199f1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.064595 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/09b56fe4-3166-4448-a186-95f3c74199f1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.064626 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/09b56fe4-3166-4448-a186-95f3c74199f1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.064688 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.064721 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/09b56fe4-3166-4448-a186-95f3c74199f1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.066226 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/09b56fe4-3166-4448-a186-95f3c74199f1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.066769 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/09b56fe4-3166-4448-a186-95f3c74199f1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.072513 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/09b56fe4-3166-4448-a186-95f3c74199f1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.072853 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/09b56fe4-3166-4448-a186-95f3c74199f1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.073596 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/09b56fe4-3166-4448-a186-95f3c74199f1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.073867 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/09b56fe4-3166-4448-a186-95f3c74199f1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.074023 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.076393 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/09b56fe4-3166-4448-a186-95f3c74199f1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.080205 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/09b56fe4-3166-4448-a186-95f3c74199f1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.084816 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/09b56fe4-3166-4448-a186-95f3c74199f1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.094589 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz4zb\" (UniqueName: \"kubernetes.io/projected/09b56fe4-3166-4448-a186-95f3c74199f1-kube-api-access-sz4zb\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.111776 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"09b56fe4-3166-4448-a186-95f3c74199f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: W0202 17:35:03.123437 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0c2e3fc_0211_4ef0_8175_05e6a8ccd8d5.slice/crio-697032694b66289d520086fee6a296361847d85c41c0fde603efda137c9d994a WatchSource:0}: Error finding container 697032694b66289d520086fee6a296361847d85c41c0fde603efda137c9d994a: Status 404 returned error can't find the container with id 697032694b66289d520086fee6a296361847d85c41c0fde603efda137c9d994a Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.137873 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-bsc4m"] Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.238254 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.702208 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.790632 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f470a8b9-224f-436f-bbbb-c6ab6b1f587e","Type":"ContainerStarted","Data":"b164f5bcb75cc95fa59d966c43e435aed56a47b5d2a81087673c0f542e47d87f"} Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.793756 4858 generic.go:334] "Generic (PLEG): container finished" podID="f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5" containerID="0b35a3c551ba065c301ff474c523aadc072e944f92defe607e0fe3ed45cdcd6e" exitCode=0 Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.793858 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" event={"ID":"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5","Type":"ContainerDied","Data":"0b35a3c551ba065c301ff474c523aadc072e944f92defe607e0fe3ed45cdcd6e"} Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.793888 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" event={"ID":"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5","Type":"ContainerStarted","Data":"697032694b66289d520086fee6a296361847d85c41c0fde603efda137c9d994a"} Feb 02 17:35:03 crc kubenswrapper[4858]: I0202 17:35:03.795809 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"09b56fe4-3166-4448-a186-95f3c74199f1","Type":"ContainerStarted","Data":"cf140a79c26e793d0c8224bb00ee46e8effa5ba6a75ce7eda1609b02e9ec7c71"} Feb 02 17:35:04 crc kubenswrapper[4858]: I0202 17:35:04.415581 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bbd4c99-7b4c-4bb5-833d-aa638125ad9e" path="/var/lib/kubelet/pods/0bbd4c99-7b4c-4bb5-833d-aa638125ad9e/volumes" Feb 02 17:35:04 crc kubenswrapper[4858]: I0202 17:35:04.811260 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" event={"ID":"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5","Type":"ContainerStarted","Data":"aacc101b6f5a142c70991e47b0f2c804b97f4f2a7817eb7a5ee2f4ea3a324c8f"} Feb 02 17:35:04 crc kubenswrapper[4858]: I0202 17:35:04.813151 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:04 crc kubenswrapper[4858]: I0202 17:35:04.815624 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f470a8b9-224f-436f-bbbb-c6ab6b1f587e","Type":"ContainerStarted","Data":"a0ac70e31fd4e25b878af91d28365083cf12abad375da0b3eb7f845eb952c292"} Feb 02 17:35:04 crc kubenswrapper[4858]: I0202 17:35:04.833193 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" podStartSLOduration=2.833180529 podStartE2EDuration="2.833180529s" podCreationTimestamp="2026-02-02 17:35:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:35:04.831168092 +0000 UTC m=+1205.983583367" watchObservedRunningTime="2026-02-02 17:35:04.833180529 +0000 UTC m=+1205.985595794" Feb 02 17:35:05 crc kubenswrapper[4858]: I0202 17:35:05.826054 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"09b56fe4-3166-4448-a186-95f3c74199f1","Type":"ContainerStarted","Data":"87289e7bf30264ae635291899e6b3fe5508f017fc0f128c5ec99667c272fbfef"} Feb 02 17:35:12 crc kubenswrapper[4858]: I0202 17:35:12.729881 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:12 crc kubenswrapper[4858]: I0202 17:35:12.799669 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-8hchg"] Feb 02 17:35:12 crc kubenswrapper[4858]: I0202 17:35:12.890237 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" podUID="3bdf9361-19a2-4c6d-a909-6c50d53e5d76" containerName="dnsmasq-dns" containerID="cri-o://749dc2e33b3990e97f669fcd14a8bec84f89aa688a0fc50736070f6756f2197f" gracePeriod=10 Feb 02 17:35:12 crc kubenswrapper[4858]: I0202 17:35:12.965456 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-96qx9"] Feb 02 17:35:12 crc kubenswrapper[4858]: I0202 17:35:12.966960 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:12 crc kubenswrapper[4858]: I0202 17:35:12.993724 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-96qx9"] Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.060740 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/435e285f-7731-45f3-8c96-282da49d50bf-dns-svc\") pod \"dnsmasq-dns-55478c4467-96qx9\" (UID: \"435e285f-7731-45f3-8c96-282da49d50bf\") " pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.060847 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/435e285f-7731-45f3-8c96-282da49d50bf-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-96qx9\" (UID: \"435e285f-7731-45f3-8c96-282da49d50bf\") " pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.061016 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/435e285f-7731-45f3-8c96-282da49d50bf-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-96qx9\" (UID: \"435e285f-7731-45f3-8c96-282da49d50bf\") " pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.061090 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/435e285f-7731-45f3-8c96-282da49d50bf-config\") pod \"dnsmasq-dns-55478c4467-96qx9\" (UID: \"435e285f-7731-45f3-8c96-282da49d50bf\") " pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.061146 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/435e285f-7731-45f3-8c96-282da49d50bf-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-96qx9\" (UID: \"435e285f-7731-45f3-8c96-282da49d50bf\") " pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.061212 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qb6nv\" (UniqueName: \"kubernetes.io/projected/435e285f-7731-45f3-8c96-282da49d50bf-kube-api-access-qb6nv\") pod \"dnsmasq-dns-55478c4467-96qx9\" (UID: \"435e285f-7731-45f3-8c96-282da49d50bf\") " pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.061251 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/435e285f-7731-45f3-8c96-282da49d50bf-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-96qx9\" (UID: \"435e285f-7731-45f3-8c96-282da49d50bf\") " pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.162887 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/435e285f-7731-45f3-8c96-282da49d50bf-dns-svc\") pod \"dnsmasq-dns-55478c4467-96qx9\" (UID: \"435e285f-7731-45f3-8c96-282da49d50bf\") " pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.163489 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/435e285f-7731-45f3-8c96-282da49d50bf-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-96qx9\" (UID: \"435e285f-7731-45f3-8c96-282da49d50bf\") " pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.163521 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/435e285f-7731-45f3-8c96-282da49d50bf-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-96qx9\" (UID: \"435e285f-7731-45f3-8c96-282da49d50bf\") " pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.163547 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/435e285f-7731-45f3-8c96-282da49d50bf-config\") pod \"dnsmasq-dns-55478c4467-96qx9\" (UID: \"435e285f-7731-45f3-8c96-282da49d50bf\") " pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.163596 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/435e285f-7731-45f3-8c96-282da49d50bf-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-96qx9\" (UID: \"435e285f-7731-45f3-8c96-282da49d50bf\") " pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.163647 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb6nv\" (UniqueName: \"kubernetes.io/projected/435e285f-7731-45f3-8c96-282da49d50bf-kube-api-access-qb6nv\") pod \"dnsmasq-dns-55478c4467-96qx9\" (UID: \"435e285f-7731-45f3-8c96-282da49d50bf\") " pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.163683 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/435e285f-7731-45f3-8c96-282da49d50bf-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-96qx9\" (UID: \"435e285f-7731-45f3-8c96-282da49d50bf\") " pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.164080 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/435e285f-7731-45f3-8c96-282da49d50bf-dns-svc\") pod \"dnsmasq-dns-55478c4467-96qx9\" (UID: \"435e285f-7731-45f3-8c96-282da49d50bf\") " pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.164554 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/435e285f-7731-45f3-8c96-282da49d50bf-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-96qx9\" (UID: \"435e285f-7731-45f3-8c96-282da49d50bf\") " pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.164666 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/435e285f-7731-45f3-8c96-282da49d50bf-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-96qx9\" (UID: \"435e285f-7731-45f3-8c96-282da49d50bf\") " pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.164848 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/435e285f-7731-45f3-8c96-282da49d50bf-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-96qx9\" (UID: \"435e285f-7731-45f3-8c96-282da49d50bf\") " pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.165410 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/435e285f-7731-45f3-8c96-282da49d50bf-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-96qx9\" (UID: \"435e285f-7731-45f3-8c96-282da49d50bf\") " pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.168332 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/435e285f-7731-45f3-8c96-282da49d50bf-config\") pod \"dnsmasq-dns-55478c4467-96qx9\" (UID: \"435e285f-7731-45f3-8c96-282da49d50bf\") " pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.216229 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb6nv\" (UniqueName: \"kubernetes.io/projected/435e285f-7731-45f3-8c96-282da49d50bf-kube-api-access-qb6nv\") pod \"dnsmasq-dns-55478c4467-96qx9\" (UID: \"435e285f-7731-45f3-8c96-282da49d50bf\") " pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.349621 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.352369 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.469605 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-ovsdbserver-nb\") pod \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.469670 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jv82\" (UniqueName: \"kubernetes.io/projected/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-kube-api-access-5jv82\") pod \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.469751 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-ovsdbserver-sb\") pod \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.469794 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-dns-svc\") pod \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.469825 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-config\") pod \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.469875 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-dns-swift-storage-0\") pod \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\" (UID: \"3bdf9361-19a2-4c6d-a909-6c50d53e5d76\") " Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.521588 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-kube-api-access-5jv82" (OuterVolumeSpecName: "kube-api-access-5jv82") pod "3bdf9361-19a2-4c6d-a909-6c50d53e5d76" (UID: "3bdf9361-19a2-4c6d-a909-6c50d53e5d76"). InnerVolumeSpecName "kube-api-access-5jv82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.547206 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3bdf9361-19a2-4c6d-a909-6c50d53e5d76" (UID: "3bdf9361-19a2-4c6d-a909-6c50d53e5d76"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.547425 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-config" (OuterVolumeSpecName: "config") pod "3bdf9361-19a2-4c6d-a909-6c50d53e5d76" (UID: "3bdf9361-19a2-4c6d-a909-6c50d53e5d76"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.548068 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3bdf9361-19a2-4c6d-a909-6c50d53e5d76" (UID: "3bdf9361-19a2-4c6d-a909-6c50d53e5d76"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.557956 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3bdf9361-19a2-4c6d-a909-6c50d53e5d76" (UID: "3bdf9361-19a2-4c6d-a909-6c50d53e5d76"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.566408 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3bdf9361-19a2-4c6d-a909-6c50d53e5d76" (UID: "3bdf9361-19a2-4c6d-a909-6c50d53e5d76"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.573387 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.573442 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jv82\" (UniqueName: \"kubernetes.io/projected/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-kube-api-access-5jv82\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.573461 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.573474 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.573490 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.573504 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bdf9361-19a2-4c6d-a909-6c50d53e5d76-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.879411 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-96qx9"] Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.899474 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-96qx9" event={"ID":"435e285f-7731-45f3-8c96-282da49d50bf","Type":"ContainerStarted","Data":"b0326b3d9efe1fded563ed10155342d1554fa8bd74b28a83286e1cc57e78b2a1"} Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.901721 4858 generic.go:334] "Generic 
(PLEG): container finished" podID="3bdf9361-19a2-4c6d-a909-6c50d53e5d76" containerID="749dc2e33b3990e97f669fcd14a8bec84f89aa688a0fc50736070f6756f2197f" exitCode=0 Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.901759 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" event={"ID":"3bdf9361-19a2-4c6d-a909-6c50d53e5d76","Type":"ContainerDied","Data":"749dc2e33b3990e97f669fcd14a8bec84f89aa688a0fc50736070f6756f2197f"} Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.901782 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" event={"ID":"3bdf9361-19a2-4c6d-a909-6c50d53e5d76","Type":"ContainerDied","Data":"7cac1979ccec50fbc838e07955e7265523313066458cdede9929b55c3042dc20"} Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.901805 4858 scope.go:117] "RemoveContainer" containerID="749dc2e33b3990e97f669fcd14a8bec84f89aa688a0fc50736070f6756f2197f" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.902229 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-8hchg" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.935332 4858 scope.go:117] "RemoveContainer" containerID="d73141e4e73332b85d5071e4381c5ee333f917aa8e1d1560143a8a8ac886cf62" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.961876 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-8hchg"] Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.972476 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-8hchg"] Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.973216 4858 scope.go:117] "RemoveContainer" containerID="749dc2e33b3990e97f669fcd14a8bec84f89aa688a0fc50736070f6756f2197f" Feb 02 17:35:13 crc kubenswrapper[4858]: E0202 17:35:13.973770 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"749dc2e33b3990e97f669fcd14a8bec84f89aa688a0fc50736070f6756f2197f\": container with ID starting with 749dc2e33b3990e97f669fcd14a8bec84f89aa688a0fc50736070f6756f2197f not found: ID does not exist" containerID="749dc2e33b3990e97f669fcd14a8bec84f89aa688a0fc50736070f6756f2197f" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.973817 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"749dc2e33b3990e97f669fcd14a8bec84f89aa688a0fc50736070f6756f2197f"} err="failed to get container status \"749dc2e33b3990e97f669fcd14a8bec84f89aa688a0fc50736070f6756f2197f\": rpc error: code = NotFound desc = could not find container \"749dc2e33b3990e97f669fcd14a8bec84f89aa688a0fc50736070f6756f2197f\": container with ID starting with 749dc2e33b3990e97f669fcd14a8bec84f89aa688a0fc50736070f6756f2197f not found: ID does not exist" Feb 02 17:35:13 crc kubenswrapper[4858]: I0202 17:35:13.973849 4858 scope.go:117] "RemoveContainer" containerID="d73141e4e73332b85d5071e4381c5ee333f917aa8e1d1560143a8a8ac886cf62" Feb 02 17:35:13 crc kubenswrapper[4858]: E0202 17:35:13.974384 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d73141e4e73332b85d5071e4381c5ee333f917aa8e1d1560143a8a8ac886cf62\": container with ID starting with d73141e4e73332b85d5071e4381c5ee333f917aa8e1d1560143a8a8ac886cf62 not found: ID does not exist" containerID="d73141e4e73332b85d5071e4381c5ee333f917aa8e1d1560143a8a8ac886cf62" Feb 02 17:35:13 
crc kubenswrapper[4858]: I0202 17:35:13.974416 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d73141e4e73332b85d5071e4381c5ee333f917aa8e1d1560143a8a8ac886cf62"} err="failed to get container status \"d73141e4e73332b85d5071e4381c5ee333f917aa8e1d1560143a8a8ac886cf62\": rpc error: code = NotFound desc = could not find container \"d73141e4e73332b85d5071e4381c5ee333f917aa8e1d1560143a8a8ac886cf62\": container with ID starting with d73141e4e73332b85d5071e4381c5ee333f917aa8e1d1560143a8a8ac886cf62 not found: ID does not exist" Feb 02 17:35:14 crc kubenswrapper[4858]: I0202 17:35:14.412813 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bdf9361-19a2-4c6d-a909-6c50d53e5d76" path="/var/lib/kubelet/pods/3bdf9361-19a2-4c6d-a909-6c50d53e5d76/volumes" Feb 02 17:35:14 crc kubenswrapper[4858]: I0202 17:35:14.913022 4858 generic.go:334] "Generic (PLEG): container finished" podID="435e285f-7731-45f3-8c96-282da49d50bf" containerID="be7ff32f5596e3ca299a637c86e585ff23f1b0ca6ceb7ee6ecae46a066cea987" exitCode=0 Feb 02 17:35:14 crc kubenswrapper[4858]: I0202 17:35:14.913073 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-96qx9" event={"ID":"435e285f-7731-45f3-8c96-282da49d50bf","Type":"ContainerDied","Data":"be7ff32f5596e3ca299a637c86e585ff23f1b0ca6ceb7ee6ecae46a066cea987"} Feb 02 17:35:15 crc kubenswrapper[4858]: I0202 17:35:15.926989 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-96qx9" event={"ID":"435e285f-7731-45f3-8c96-282da49d50bf","Type":"ContainerStarted","Data":"14b64a755547913305d2c2ad875df7d3db23bf599eb19aa5d1dd9e14372791b0"} Feb 02 17:35:15 crc kubenswrapper[4858]: I0202 17:35:15.927929 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:15 crc kubenswrapper[4858]: I0202 17:35:15.955077 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-96qx9" podStartSLOduration=3.955043572 podStartE2EDuration="3.955043572s" podCreationTimestamp="2026-02-02 17:35:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:35:15.948165277 +0000 UTC m=+1217.100580562" watchObservedRunningTime="2026-02-02 17:35:15.955043572 +0000 UTC m=+1217.107458837" Feb 02 17:35:23 crc kubenswrapper[4858]: I0202 17:35:23.353225 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-96qx9" Feb 02 17:35:23 crc kubenswrapper[4858]: I0202 17:35:23.440774 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-bsc4m"] Feb 02 17:35:23 crc kubenswrapper[4858]: I0202 17:35:23.444787 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" podUID="f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5" containerName="dnsmasq-dns" containerID="cri-o://aacc101b6f5a142c70991e47b0f2c804b97f4f2a7817eb7a5ee2f4ea3a324c8f" gracePeriod=10 Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.003341 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.019494 4858 generic.go:334] "Generic (PLEG): container finished" podID="f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5" containerID="aacc101b6f5a142c70991e47b0f2c804b97f4f2a7817eb7a5ee2f4ea3a324c8f" exitCode=0 Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.019533 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.019535 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" event={"ID":"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5","Type":"ContainerDied","Data":"aacc101b6f5a142c70991e47b0f2c804b97f4f2a7817eb7a5ee2f4ea3a324c8f"} Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.019627 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-bsc4m" event={"ID":"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5","Type":"ContainerDied","Data":"697032694b66289d520086fee6a296361847d85c41c0fde603efda137c9d994a"} Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.019643 4858 scope.go:117] "RemoveContainer" containerID="aacc101b6f5a142c70991e47b0f2c804b97f4f2a7817eb7a5ee2f4ea3a324c8f" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.076629 4858 scope.go:117] "RemoveContainer" containerID="0b35a3c551ba065c301ff474c523aadc072e944f92defe607e0fe3ed45cdcd6e" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.093741 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-dns-svc\") pod \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.093807 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-ovsdbserver-nb\") pod \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.093893 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjzkt\" (UniqueName: \"kubernetes.io/projected/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-kube-api-access-vjzkt\") pod \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.093951 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-dns-swift-storage-0\") pod \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.093999 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-config\") pod \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.094128 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-openstack-edpm-ipam\") pod 
\"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.094162 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-ovsdbserver-sb\") pod \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\" (UID: \"f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5\") " Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.101792 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-kube-api-access-vjzkt" (OuterVolumeSpecName: "kube-api-access-vjzkt") pod "f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5" (UID: "f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5"). InnerVolumeSpecName "kube-api-access-vjzkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.102023 4858 scope.go:117] "RemoveContainer" containerID="aacc101b6f5a142c70991e47b0f2c804b97f4f2a7817eb7a5ee2f4ea3a324c8f" Feb 02 17:35:24 crc kubenswrapper[4858]: E0202 17:35:24.102591 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aacc101b6f5a142c70991e47b0f2c804b97f4f2a7817eb7a5ee2f4ea3a324c8f\": container with ID starting with aacc101b6f5a142c70991e47b0f2c804b97f4f2a7817eb7a5ee2f4ea3a324c8f not found: ID does not exist" containerID="aacc101b6f5a142c70991e47b0f2c804b97f4f2a7817eb7a5ee2f4ea3a324c8f" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.102705 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aacc101b6f5a142c70991e47b0f2c804b97f4f2a7817eb7a5ee2f4ea3a324c8f"} err="failed to get container status \"aacc101b6f5a142c70991e47b0f2c804b97f4f2a7817eb7a5ee2f4ea3a324c8f\": rpc error: code = NotFound desc = could not find container \"aacc101b6f5a142c70991e47b0f2c804b97f4f2a7817eb7a5ee2f4ea3a324c8f\": container with ID starting with aacc101b6f5a142c70991e47b0f2c804b97f4f2a7817eb7a5ee2f4ea3a324c8f not found: ID does not exist" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.102810 4858 scope.go:117] "RemoveContainer" containerID="0b35a3c551ba065c301ff474c523aadc072e944f92defe607e0fe3ed45cdcd6e" Feb 02 17:35:24 crc kubenswrapper[4858]: E0202 17:35:24.103170 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b35a3c551ba065c301ff474c523aadc072e944f92defe607e0fe3ed45cdcd6e\": container with ID starting with 0b35a3c551ba065c301ff474c523aadc072e944f92defe607e0fe3ed45cdcd6e not found: ID does not exist" containerID="0b35a3c551ba065c301ff474c523aadc072e944f92defe607e0fe3ed45cdcd6e" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.103267 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b35a3c551ba065c301ff474c523aadc072e944f92defe607e0fe3ed45cdcd6e"} err="failed to get container status \"0b35a3c551ba065c301ff474c523aadc072e944f92defe607e0fe3ed45cdcd6e\": rpc error: code = NotFound desc = could not find container \"0b35a3c551ba065c301ff474c523aadc072e944f92defe607e0fe3ed45cdcd6e\": container with ID starting with 0b35a3c551ba065c301ff474c523aadc072e944f92defe607e0fe3ed45cdcd6e not found: ID does not exist" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.148633 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5" (UID: "f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.148895 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-config" (OuterVolumeSpecName: "config") pod "f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5" (UID: "f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.150195 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5" (UID: "f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.150192 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5" (UID: "f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.154547 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5" (UID: "f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.156328 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5" (UID: "f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.196494 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.196857 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.196874 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.196885 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.196898 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjzkt\" (UniqueName: \"kubernetes.io/projected/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-kube-api-access-vjzkt\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.196911 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.196923 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5-config\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.355038 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-bsc4m"] Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.363647 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-bsc4m"] Feb 02 17:35:24 crc kubenswrapper[4858]: I0202 17:35:24.409687 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5" path="/var/lib/kubelet/pods/f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5/volumes" Feb 02 17:35:27 crc kubenswrapper[4858]: I0202 17:35:27.807960 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:35:27 crc kubenswrapper[4858]: I0202 17:35:27.808746 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.655002 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p"] Feb 02 17:35:36 crc kubenswrapper[4858]: E0202 17:35:36.655809 4858 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5" containerName="init" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.655820 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5" containerName="init" Feb 02 17:35:36 crc kubenswrapper[4858]: E0202 17:35:36.655840 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5" containerName="dnsmasq-dns" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.655846 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5" containerName="dnsmasq-dns" Feb 02 17:35:36 crc kubenswrapper[4858]: E0202 17:35:36.655854 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bdf9361-19a2-4c6d-a909-6c50d53e5d76" containerName="dnsmasq-dns" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.655860 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bdf9361-19a2-4c6d-a909-6c50d53e5d76" containerName="dnsmasq-dns" Feb 02 17:35:36 crc kubenswrapper[4858]: E0202 17:35:36.655874 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bdf9361-19a2-4c6d-a909-6c50d53e5d76" containerName="init" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.655880 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bdf9361-19a2-4c6d-a909-6c50d53e5d76" containerName="init" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.656075 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bdf9361-19a2-4c6d-a909-6c50d53e5d76" containerName="dnsmasq-dns" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.656092 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0c2e3fc-0211-4ef0-8175-05e6a8ccd8d5" containerName="dnsmasq-dns" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.656664 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.659131 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.659305 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q7l94" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.660115 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.664600 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.689691 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p"] Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.762268 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p\" (UID: \"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.762667 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p\" (UID: \"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.763073 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vpnz\" (UniqueName: \"kubernetes.io/projected/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-kube-api-access-4vpnz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p\" (UID: \"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.763366 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p\" (UID: \"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.865432 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p\" (UID: \"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.865560 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p\" (UID: \"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.865650 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p\" (UID: \"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.865768 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vpnz\" (UniqueName: \"kubernetes.io/projected/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-kube-api-access-4vpnz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p\" (UID: \"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.871548 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p\" (UID: \"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.871560 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p\" (UID: \"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.877599 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p\" (UID: \"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.889165 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vpnz\" (UniqueName: \"kubernetes.io/projected/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-kube-api-access-4vpnz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p\" (UID: \"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p" Feb 02 17:35:36 crc kubenswrapper[4858]: I0202 17:35:36.973744 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p" Feb 02 17:35:37 crc kubenswrapper[4858]: I0202 17:35:37.192304 4858 generic.go:334] "Generic (PLEG): container finished" podID="f470a8b9-224f-436f-bbbb-c6ab6b1f587e" containerID="a0ac70e31fd4e25b878af91d28365083cf12abad375da0b3eb7f845eb952c292" exitCode=0 Feb 02 17:35:37 crc kubenswrapper[4858]: I0202 17:35:37.192496 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f470a8b9-224f-436f-bbbb-c6ab6b1f587e","Type":"ContainerDied","Data":"a0ac70e31fd4e25b878af91d28365083cf12abad375da0b3eb7f845eb952c292"} Feb 02 17:35:38 crc kubenswrapper[4858]: I0202 17:35:37.567218 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p"] Feb 02 17:35:38 crc kubenswrapper[4858]: W0202 17:35:37.568691 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ff7fb8a_e464_4395_a2a3_60ed7a06ba5b.slice/crio-2414d477dd17c42d7d8990d48c75056d9e75f172fd8b415c2d7436031d29d4e1 WatchSource:0}: Error finding container 2414d477dd17c42d7d8990d48c75056d9e75f172fd8b415c2d7436031d29d4e1: Status 404 returned error can't find the container with id 2414d477dd17c42d7d8990d48c75056d9e75f172fd8b415c2d7436031d29d4e1 Feb 02 17:35:38 crc kubenswrapper[4858]: I0202 17:35:37.571253 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 17:35:38 crc kubenswrapper[4858]: I0202 17:35:38.206065 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f470a8b9-224f-436f-bbbb-c6ab6b1f587e","Type":"ContainerStarted","Data":"2855ab46d56923627857b4e3905a054b680c1b7124d360b9220a3f8eb489a1c8"} Feb 02 17:35:38 crc kubenswrapper[4858]: I0202 17:35:38.206513 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 02 17:35:38 crc kubenswrapper[4858]: I0202 17:35:38.208429 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p" event={"ID":"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b","Type":"ContainerStarted","Data":"2414d477dd17c42d7d8990d48c75056d9e75f172fd8b415c2d7436031d29d4e1"} Feb 02 17:35:38 crc kubenswrapper[4858]: I0202 17:35:38.210729 4858 generic.go:334] "Generic (PLEG): container finished" podID="09b56fe4-3166-4448-a186-95f3c74199f1" containerID="87289e7bf30264ae635291899e6b3fe5508f017fc0f128c5ec99667c272fbfef" exitCode=0 Feb 02 17:35:38 crc kubenswrapper[4858]: I0202 17:35:38.210787 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"09b56fe4-3166-4448-a186-95f3c74199f1","Type":"ContainerDied","Data":"87289e7bf30264ae635291899e6b3fe5508f017fc0f128c5ec99667c272fbfef"} Feb 02 17:35:38 crc kubenswrapper[4858]: I0202 17:35:38.245791 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.245775942 podStartE2EDuration="37.245775942s" podCreationTimestamp="2026-02-02 17:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:35:38.241335546 +0000 UTC m=+1239.393750811" watchObservedRunningTime="2026-02-02 17:35:38.245775942 +0000 UTC m=+1239.398191207" Feb 02 17:35:39 crc kubenswrapper[4858]: I0202 17:35:39.225404 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"09b56fe4-3166-4448-a186-95f3c74199f1","Type":"ContainerStarted","Data":"6992bd480efc326266c3e879c63fd8ee80b68470293619341732c0115cac6823"} Feb 02 17:35:39 crc kubenswrapper[4858]: I0202 17:35:39.225753 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:39 crc kubenswrapper[4858]: I0202 17:35:39.254198 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.254172861 podStartE2EDuration="37.254172861s" podCreationTimestamp="2026-02-02 17:35:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:35:39.244391464 +0000 UTC m=+1240.396806729" watchObservedRunningTime="2026-02-02 17:35:39.254172861 +0000 UTC m=+1240.406588126" Feb 02 17:35:47 crc kubenswrapper[4858]: I0202 17:35:47.309401 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p" event={"ID":"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b","Type":"ContainerStarted","Data":"27ef8b149e23cbaa968e5e51cbb748bfca34c632c829625890248a8cb65e782e"} Feb 02 17:35:47 crc kubenswrapper[4858]: I0202 17:35:47.336734 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p" podStartSLOduration=2.505434197 podStartE2EDuration="11.33670115s" podCreationTimestamp="2026-02-02 17:35:36 +0000 UTC" firstStartedPulling="2026-02-02 17:35:37.570987978 +0000 UTC m=+1238.723403263" lastFinishedPulling="2026-02-02 17:35:46.402254951 +0000 UTC m=+1247.554670216" observedRunningTime="2026-02-02 17:35:47.328414685 +0000 UTC m=+1248.480829970" watchObservedRunningTime="2026-02-02 17:35:47.33670115 +0000 UTC m=+1248.489116425" Feb 02 17:35:52 crc kubenswrapper[4858]: I0202 17:35:52.278368 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 02 17:35:53 crc kubenswrapper[4858]: I0202 17:35:53.242153 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 02 17:35:57 crc kubenswrapper[4858]: I0202 17:35:57.441884 4858 generic.go:334] "Generic (PLEG): container finished" podID="6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b" containerID="27ef8b149e23cbaa968e5e51cbb748bfca34c632c829625890248a8cb65e782e" exitCode=0 Feb 02 17:35:57 crc kubenswrapper[4858]: I0202 17:35:57.441950 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p" event={"ID":"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b","Type":"ContainerDied","Data":"27ef8b149e23cbaa968e5e51cbb748bfca34c632c829625890248a8cb65e782e"} Feb 02 17:35:57 crc kubenswrapper[4858]: I0202 17:35:57.808182 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:35:57 crc kubenswrapper[4858]: I0202 17:35:57.808262 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:35:57 crc kubenswrapper[4858]: I0202 17:35:57.808316 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" Feb 02 17:35:57 crc kubenswrapper[4858]: I0202 17:35:57.809309 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"29f5b545eb82d931c7c8ceb6afb897d3a7adcbb180bbad52cb5301078f6256a8"} pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 17:35:57 crc kubenswrapper[4858]: I0202 17:35:57.809409 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" containerID="cri-o://29f5b545eb82d931c7c8ceb6afb897d3a7adcbb180bbad52cb5301078f6256a8" gracePeriod=600 Feb 02 17:35:58 crc kubenswrapper[4858]: I0202 17:35:58.456248 4858 generic.go:334] "Generic (PLEG): container finished" podID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerID="29f5b545eb82d931c7c8ceb6afb897d3a7adcbb180bbad52cb5301078f6256a8" exitCode=0 Feb 02 17:35:58 crc kubenswrapper[4858]: I0202 17:35:58.456322 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerDied","Data":"29f5b545eb82d931c7c8ceb6afb897d3a7adcbb180bbad52cb5301078f6256a8"} Feb 02 17:35:58 crc kubenswrapper[4858]: I0202 17:35:58.456610 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerStarted","Data":"a72185f737bdc7412fd3cbd9bb1c4c3b6a039e9c69f0ba35d250e546f8d8b6ce"} Feb 02 17:35:58 crc kubenswrapper[4858]: I0202 17:35:58.456634 4858 scope.go:117] "RemoveContainer" containerID="b38cf52a6ef125bca2bfc0fb953106251c191f34dbe401ffe7c0fa9cbe521a8f" Feb 02 17:35:58 crc kubenswrapper[4858]: I0202 17:35:58.842879 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.022413 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-inventory\") pod \"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b\" (UID: \"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b\") " Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.022735 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-ssh-key-openstack-edpm-ipam\") pod \"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b\" (UID: \"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b\") " Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.022866 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vpnz\" (UniqueName: \"kubernetes.io/projected/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-kube-api-access-4vpnz\") pod \"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b\" (UID: \"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b\") " Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.023035 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-repo-setup-combined-ca-bundle\") pod \"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b\" (UID: \"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b\") " Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.029062 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-kube-api-access-4vpnz" (OuterVolumeSpecName: "kube-api-access-4vpnz") pod "6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b" (UID: "6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b"). InnerVolumeSpecName "kube-api-access-4vpnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.036716 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b" (UID: "6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.058359 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-inventory" (OuterVolumeSpecName: "inventory") pod "6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b" (UID: "6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.081091 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b" (UID: "6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.126264 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.126305 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vpnz\" (UniqueName: \"kubernetes.io/projected/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-kube-api-access-4vpnz\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.126318 4858 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.126333 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.474038 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p" event={"ID":"6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b","Type":"ContainerDied","Data":"2414d477dd17c42d7d8990d48c75056d9e75f172fd8b415c2d7436031d29d4e1"} Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.475932 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2414d477dd17c42d7d8990d48c75056d9e75f172fd8b415c2d7436031d29d4e1" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.474059 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.587781 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7vtt"] Feb 02 17:35:59 crc kubenswrapper[4858]: E0202 17:35:59.588291 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.588313 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.588538 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.589346 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7vtt" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.591537 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q7l94" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.592093 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.593343 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.593372 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.597925 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7vtt"] Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.739174 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7vtt\" (UID: \"ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7vtt" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.739279 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66wfx\" (UniqueName: \"kubernetes.io/projected/ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c-kube-api-access-66wfx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7vtt\" (UID: \"ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7vtt" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.739423 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7vtt\" (UID: \"ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7vtt" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.841184 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7vtt\" (UID: \"ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7vtt" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.841341 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7vtt\" (UID: \"ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7vtt" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.841376 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66wfx\" (UniqueName: \"kubernetes.io/projected/ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c-kube-api-access-66wfx\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-h7vtt\" (UID: \"ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7vtt" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.847390 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7vtt\" (UID: \"ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7vtt" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.850597 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7vtt\" (UID: \"ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7vtt" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.861173 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66wfx\" (UniqueName: \"kubernetes.io/projected/ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c-kube-api-access-66wfx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h7vtt\" (UID: \"ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7vtt" Feb 02 17:35:59 crc kubenswrapper[4858]: I0202 17:35:59.912103 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7vtt" Feb 02 17:36:00 crc kubenswrapper[4858]: I0202 17:36:00.446874 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7vtt"] Feb 02 17:36:00 crc kubenswrapper[4858]: I0202 17:36:00.514063 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7vtt" event={"ID":"ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c","Type":"ContainerStarted","Data":"8ee6a21d6804f4be1d6bd28db7600483a80ee2807dab9dca2be47fd8fb0de841"} Feb 02 17:36:00 crc kubenswrapper[4858]: I0202 17:36:00.933413 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 17:36:01 crc kubenswrapper[4858]: I0202 17:36:01.522321 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7vtt" event={"ID":"ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c","Type":"ContainerStarted","Data":"85b0c943f4a72b5d8131cb3bc92da24ec18115ed38a999fd681837ec9de76564"} Feb 02 17:36:01 crc kubenswrapper[4858]: I0202 17:36:01.545483 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7vtt" podStartSLOduration=2.071503726 podStartE2EDuration="2.545458567s" podCreationTimestamp="2026-02-02 17:35:59 +0000 UTC" firstStartedPulling="2026-02-02 17:36:00.456125489 +0000 UTC m=+1261.608540774" lastFinishedPulling="2026-02-02 17:36:00.93008035 +0000 UTC m=+1262.082495615" observedRunningTime="2026-02-02 17:36:01.538263163 +0000 UTC m=+1262.690678428" watchObservedRunningTime="2026-02-02 17:36:01.545458567 +0000 UTC m=+1262.697873842" Feb 02 17:36:04 crc kubenswrapper[4858]: I0202 17:36:04.560800 4858 generic.go:334] "Generic (PLEG): container finished" podID="ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c" 
containerID="85b0c943f4a72b5d8131cb3bc92da24ec18115ed38a999fd681837ec9de76564" exitCode=0 Feb 02 17:36:04 crc kubenswrapper[4858]: I0202 17:36:04.560883 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7vtt" event={"ID":"ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c","Type":"ContainerDied","Data":"85b0c943f4a72b5d8131cb3bc92da24ec18115ed38a999fd681837ec9de76564"} Feb 02 17:36:05 crc kubenswrapper[4858]: I0202 17:36:05.933483 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7vtt" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.073593 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c-inventory\") pod \"ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c\" (UID: \"ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c\") " Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.073816 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c-ssh-key-openstack-edpm-ipam\") pod \"ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c\" (UID: \"ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c\") " Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.073882 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66wfx\" (UniqueName: \"kubernetes.io/projected/ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c-kube-api-access-66wfx\") pod \"ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c\" (UID: \"ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c\") " Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.079082 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c-kube-api-access-66wfx" (OuterVolumeSpecName: "kube-api-access-66wfx") pod "ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c" (UID: "ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c"). InnerVolumeSpecName "kube-api-access-66wfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.114295 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c" (UID: "ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.123742 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c-inventory" (OuterVolumeSpecName: "inventory") pod "ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c" (UID: "ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.177929 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.177961 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.177991 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66wfx\" (UniqueName: \"kubernetes.io/projected/ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c-kube-api-access-66wfx\") on node \"crc\" DevicePath \"\"" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.586608 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7vtt" event={"ID":"ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c","Type":"ContainerDied","Data":"8ee6a21d6804f4be1d6bd28db7600483a80ee2807dab9dca2be47fd8fb0de841"} Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.586645 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ee6a21d6804f4be1d6bd28db7600483a80ee2807dab9dca2be47fd8fb0de841" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.586647 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h7vtt" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.656915 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm"] Feb 02 17:36:06 crc kubenswrapper[4858]: E0202 17:36:06.657316 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.657334 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.657504 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.658200 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.660166 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.660258 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.660335 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q7l94" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.660626 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.671174 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm"] Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.687191 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0787e12-6645-4df3-8850-b9698b323f69-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm\" (UID: \"d0787e12-6645-4df3-8850-b9698b323f69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.687255 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0787e12-6645-4df3-8850-b9698b323f69-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm\" (UID: \"d0787e12-6645-4df3-8850-b9698b323f69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.687405 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0787e12-6645-4df3-8850-b9698b323f69-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm\" (UID: \"d0787e12-6645-4df3-8850-b9698b323f69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.687508 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfzs4\" (UniqueName: \"kubernetes.io/projected/d0787e12-6645-4df3-8850-b9698b323f69-kube-api-access-sfzs4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm\" (UID: \"d0787e12-6645-4df3-8850-b9698b323f69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.789582 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0787e12-6645-4df3-8850-b9698b323f69-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm\" (UID: \"d0787e12-6645-4df3-8850-b9698b323f69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.789658 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/d0787e12-6645-4df3-8850-b9698b323f69-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm\" (UID: \"d0787e12-6645-4df3-8850-b9698b323f69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.789714 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0787e12-6645-4df3-8850-b9698b323f69-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm\" (UID: \"d0787e12-6645-4df3-8850-b9698b323f69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.789770 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfzs4\" (UniqueName: \"kubernetes.io/projected/d0787e12-6645-4df3-8850-b9698b323f69-kube-api-access-sfzs4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm\" (UID: \"d0787e12-6645-4df3-8850-b9698b323f69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.793112 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0787e12-6645-4df3-8850-b9698b323f69-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm\" (UID: \"d0787e12-6645-4df3-8850-b9698b323f69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.793331 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0787e12-6645-4df3-8850-b9698b323f69-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm\" (UID: \"d0787e12-6645-4df3-8850-b9698b323f69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.794786 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0787e12-6645-4df3-8850-b9698b323f69-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm\" (UID: \"d0787e12-6645-4df3-8850-b9698b323f69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.806816 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfzs4\" (UniqueName: \"kubernetes.io/projected/d0787e12-6645-4df3-8850-b9698b323f69-kube-api-access-sfzs4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm\" (UID: \"d0787e12-6645-4df3-8850-b9698b323f69\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm" Feb 02 17:36:06 crc kubenswrapper[4858]: I0202 17:36:06.985727 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm" Feb 02 17:36:07 crc kubenswrapper[4858]: I0202 17:36:07.556251 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm"] Feb 02 17:36:07 crc kubenswrapper[4858]: I0202 17:36:07.598722 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm" event={"ID":"d0787e12-6645-4df3-8850-b9698b323f69","Type":"ContainerStarted","Data":"5f7f3edd885d8c785d5dc20fcdaee9eb0f813da6c161e32efa1a0b66a5fca229"} Feb 02 17:36:08 crc kubenswrapper[4858]: I0202 17:36:08.613045 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm" event={"ID":"d0787e12-6645-4df3-8850-b9698b323f69","Type":"ContainerStarted","Data":"75556f73aa2c1fa6a42d8378d5a7506bdf8106b061ec894c9e2af1853efb3790"} Feb 02 17:36:08 crc kubenswrapper[4858]: I0202 17:36:08.641547 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm" podStartSLOduration=2.200203276 podStartE2EDuration="2.6415294s" podCreationTimestamp="2026-02-02 17:36:06 +0000 UTC" firstStartedPulling="2026-02-02 17:36:07.553314164 +0000 UTC m=+1268.705729429" lastFinishedPulling="2026-02-02 17:36:07.994640268 +0000 UTC m=+1269.147055553" observedRunningTime="2026-02-02 17:36:08.635463588 +0000 UTC m=+1269.787878893" watchObservedRunningTime="2026-02-02 17:36:08.6415294 +0000 UTC m=+1269.793944665" Feb 02 17:37:08 crc kubenswrapper[4858]: I0202 17:37:08.565726 4858 scope.go:117] "RemoveContainer" containerID="26f2c9ac1610ea82ece37363db8e837c1be599e607cc975e2fb1ff8dba6ae0f5" Feb 02 17:37:08 crc kubenswrapper[4858]: I0202 17:37:08.594210 4858 scope.go:117] "RemoveContainer" containerID="74da3e96d0b80e4d6e3fc0c1005cca38cca18cfe3abed397df82c484dfcc7f1f" Feb 02 17:37:08 crc kubenswrapper[4858]: I0202 17:37:08.636363 4858 scope.go:117] "RemoveContainer" containerID="e0d357ff68dfc0c048c53fbd7b229d228bcd0d28988ef48b8396526b0fe205bc" Feb 02 17:37:08 crc kubenswrapper[4858]: I0202 17:37:08.675057 4858 scope.go:117] "RemoveContainer" containerID="362934c06ec15d7fb4f6465121f97953b5f5ab94a1652af9b4f36d7fe3075e5a" Feb 02 17:37:08 crc kubenswrapper[4858]: I0202 17:37:08.706554 4858 scope.go:117] "RemoveContainer" containerID="8a893e64e653d2a20cf68f5b07159010b5cc01b61a63ee2621691d468a67a7fd" Feb 02 17:38:08 crc kubenswrapper[4858]: I0202 17:38:08.847868 4858 scope.go:117] "RemoveContainer" containerID="17c18a68c197dd49f22046cec6a9275bbdace79f384ad7a69c7a7a19904202f4" Feb 02 17:38:27 crc kubenswrapper[4858]: I0202 17:38:27.808237 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:38:27 crc kubenswrapper[4858]: I0202 17:38:27.808845 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:38:57 crc kubenswrapper[4858]: I0202 17:38:57.808344 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:38:57 crc kubenswrapper[4858]: I0202 17:38:57.809303 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:39:08 crc kubenswrapper[4858]: I0202 17:39:08.939897 4858 scope.go:117] "RemoveContainer" containerID="2621fba36fd002bf17ceb1117bb5130ee864bf81e83d1b673e3d9285de94d262" Feb 02 17:39:08 crc kubenswrapper[4858]: I0202 17:39:08.980720 4858 scope.go:117] "RemoveContainer" containerID="c7659a70e9fe979cae25906f80227d33ac8724318103af8ec26148cda948910c" Feb 02 17:39:09 crc kubenswrapper[4858]: I0202 17:39:09.009183 4858 scope.go:117] "RemoveContainer" containerID="6026404fd6743e900c54c598ff19cf2f9ce23b0afe99d5c3511ee477f4858c32" Feb 02 17:39:09 crc kubenswrapper[4858]: I0202 17:39:09.041152 4858 scope.go:117] "RemoveContainer" containerID="4459218d70c6944f401c87957b937a33a64faee542c41b3c309b078fca371f6d" Feb 02 17:39:27 crc kubenswrapper[4858]: I0202 17:39:27.807845 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:39:27 crc kubenswrapper[4858]: I0202 17:39:27.808557 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:39:27 crc kubenswrapper[4858]: I0202 17:39:27.808613 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" Feb 02 17:39:27 crc kubenswrapper[4858]: I0202 17:39:27.809518 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a72185f737bdc7412fd3cbd9bb1c4c3b6a039e9c69f0ba35d250e546f8d8b6ce"} pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 17:39:27 crc kubenswrapper[4858]: I0202 17:39:27.809615 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" containerID="cri-o://a72185f737bdc7412fd3cbd9bb1c4c3b6a039e9c69f0ba35d250e546f8d8b6ce" gracePeriod=600 Feb 02 17:39:28 crc kubenswrapper[4858]: I0202 17:39:28.746258 4858 generic.go:334] "Generic (PLEG): container finished" podID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerID="a72185f737bdc7412fd3cbd9bb1c4c3b6a039e9c69f0ba35d250e546f8d8b6ce" exitCode=0 Feb 02 17:39:28 crc kubenswrapper[4858]: I0202 17:39:28.746357 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" 
event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerDied","Data":"a72185f737bdc7412fd3cbd9bb1c4c3b6a039e9c69f0ba35d250e546f8d8b6ce"} Feb 02 17:39:28 crc kubenswrapper[4858]: I0202 17:39:28.746898 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerStarted","Data":"74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc"} Feb 02 17:39:28 crc kubenswrapper[4858]: I0202 17:39:28.746932 4858 scope.go:117] "RemoveContainer" containerID="29f5b545eb82d931c7c8ceb6afb897d3a7adcbb180bbad52cb5301078f6256a8" Feb 02 17:39:39 crc kubenswrapper[4858]: I0202 17:39:39.864164 4858 generic.go:334] "Generic (PLEG): container finished" podID="d0787e12-6645-4df3-8850-b9698b323f69" containerID="75556f73aa2c1fa6a42d8378d5a7506bdf8106b061ec894c9e2af1853efb3790" exitCode=0 Feb 02 17:39:39 crc kubenswrapper[4858]: I0202 17:39:39.864257 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm" event={"ID":"d0787e12-6645-4df3-8850-b9698b323f69","Type":"ContainerDied","Data":"75556f73aa2c1fa6a42d8378d5a7506bdf8106b061ec894c9e2af1853efb3790"} Feb 02 17:39:41 crc kubenswrapper[4858]: I0202 17:39:41.344598 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm" Feb 02 17:39:41 crc kubenswrapper[4858]: I0202 17:39:41.442668 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0787e12-6645-4df3-8850-b9698b323f69-inventory\") pod \"d0787e12-6645-4df3-8850-b9698b323f69\" (UID: \"d0787e12-6645-4df3-8850-b9698b323f69\") " Feb 02 17:39:41 crc kubenswrapper[4858]: I0202 17:39:41.442753 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0787e12-6645-4df3-8850-b9698b323f69-bootstrap-combined-ca-bundle\") pod \"d0787e12-6645-4df3-8850-b9698b323f69\" (UID: \"d0787e12-6645-4df3-8850-b9698b323f69\") " Feb 02 17:39:41 crc kubenswrapper[4858]: I0202 17:39:41.442836 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0787e12-6645-4df3-8850-b9698b323f69-ssh-key-openstack-edpm-ipam\") pod \"d0787e12-6645-4df3-8850-b9698b323f69\" (UID: \"d0787e12-6645-4df3-8850-b9698b323f69\") " Feb 02 17:39:41 crc kubenswrapper[4858]: I0202 17:39:41.442928 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfzs4\" (UniqueName: \"kubernetes.io/projected/d0787e12-6645-4df3-8850-b9698b323f69-kube-api-access-sfzs4\") pod \"d0787e12-6645-4df3-8850-b9698b323f69\" (UID: \"d0787e12-6645-4df3-8850-b9698b323f69\") " Feb 02 17:39:41 crc kubenswrapper[4858]: I0202 17:39:41.448821 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0787e12-6645-4df3-8850-b9698b323f69-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d0787e12-6645-4df3-8850-b9698b323f69" (UID: "d0787e12-6645-4df3-8850-b9698b323f69"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:39:41 crc kubenswrapper[4858]: I0202 17:39:41.449767 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0787e12-6645-4df3-8850-b9698b323f69-kube-api-access-sfzs4" (OuterVolumeSpecName: "kube-api-access-sfzs4") pod "d0787e12-6645-4df3-8850-b9698b323f69" (UID: "d0787e12-6645-4df3-8850-b9698b323f69"). InnerVolumeSpecName "kube-api-access-sfzs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:39:41 crc kubenswrapper[4858]: I0202 17:39:41.473562 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0787e12-6645-4df3-8850-b9698b323f69-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d0787e12-6645-4df3-8850-b9698b323f69" (UID: "d0787e12-6645-4df3-8850-b9698b323f69"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:39:41 crc kubenswrapper[4858]: I0202 17:39:41.479151 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0787e12-6645-4df3-8850-b9698b323f69-inventory" (OuterVolumeSpecName: "inventory") pod "d0787e12-6645-4df3-8850-b9698b323f69" (UID: "d0787e12-6645-4df3-8850-b9698b323f69"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:39:41 crc kubenswrapper[4858]: I0202 17:39:41.545312 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0787e12-6645-4df3-8850-b9698b323f69-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 17:39:41 crc kubenswrapper[4858]: I0202 17:39:41.545362 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfzs4\" (UniqueName: \"kubernetes.io/projected/d0787e12-6645-4df3-8850-b9698b323f69-kube-api-access-sfzs4\") on node \"crc\" DevicePath \"\"" Feb 02 17:39:41 crc kubenswrapper[4858]: I0202 17:39:41.545377 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0787e12-6645-4df3-8850-b9698b323f69-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 17:39:41 crc kubenswrapper[4858]: I0202 17:39:41.545390 4858 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0787e12-6645-4df3-8850-b9698b323f69-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:39:41 crc kubenswrapper[4858]: I0202 17:39:41.888681 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm" event={"ID":"d0787e12-6645-4df3-8850-b9698b323f69","Type":"ContainerDied","Data":"5f7f3edd885d8c785d5dc20fcdaee9eb0f813da6c161e32efa1a0b66a5fca229"} Feb 02 17:39:41 crc kubenswrapper[4858]: I0202 17:39:41.888766 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f7f3edd885d8c785d5dc20fcdaee9eb0f813da6c161e32efa1a0b66a5fca229" Feb 02 17:39:41 crc kubenswrapper[4858]: I0202 17:39:41.888791 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm" Feb 02 17:39:42 crc kubenswrapper[4858]: I0202 17:39:42.013317 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-27k29"] Feb 02 17:39:42 crc kubenswrapper[4858]: E0202 17:39:42.013812 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0787e12-6645-4df3-8850-b9698b323f69" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 17:39:42 crc kubenswrapper[4858]: I0202 17:39:42.013835 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0787e12-6645-4df3-8850-b9698b323f69" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 17:39:42 crc kubenswrapper[4858]: I0202 17:39:42.014813 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0787e12-6645-4df3-8850-b9698b323f69" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 02 17:39:42 crc kubenswrapper[4858]: I0202 17:39:42.015619 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-27k29" Feb 02 17:39:42 crc kubenswrapper[4858]: I0202 17:39:42.018208 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q7l94" Feb 02 17:39:42 crc kubenswrapper[4858]: I0202 17:39:42.018469 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 17:39:42 crc kubenswrapper[4858]: I0202 17:39:42.019276 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 17:39:42 crc kubenswrapper[4858]: I0202 17:39:42.019745 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 17:39:42 crc kubenswrapper[4858]: I0202 17:39:42.045121 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-27k29"] Feb 02 17:39:42 crc kubenswrapper[4858]: I0202 17:39:42.159020 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62dj5\" (UniqueName: \"kubernetes.io/projected/b94ab7ee-11a9-42ea-ae40-32926a53ed9a-kube-api-access-62dj5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-27k29\" (UID: \"b94ab7ee-11a9-42ea-ae40-32926a53ed9a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-27k29" Feb 02 17:39:42 crc kubenswrapper[4858]: I0202 17:39:42.159302 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b94ab7ee-11a9-42ea-ae40-32926a53ed9a-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-27k29\" (UID: \"b94ab7ee-11a9-42ea-ae40-32926a53ed9a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-27k29" Feb 02 17:39:42 crc kubenswrapper[4858]: I0202 17:39:42.159454 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b94ab7ee-11a9-42ea-ae40-32926a53ed9a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-27k29\" (UID: \"b94ab7ee-11a9-42ea-ae40-32926a53ed9a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-27k29" Feb 02 17:39:42 crc kubenswrapper[4858]: 
I0202 17:39:42.261125 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62dj5\" (UniqueName: \"kubernetes.io/projected/b94ab7ee-11a9-42ea-ae40-32926a53ed9a-kube-api-access-62dj5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-27k29\" (UID: \"b94ab7ee-11a9-42ea-ae40-32926a53ed9a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-27k29" Feb 02 17:39:42 crc kubenswrapper[4858]: I0202 17:39:42.261579 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b94ab7ee-11a9-42ea-ae40-32926a53ed9a-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-27k29\" (UID: \"b94ab7ee-11a9-42ea-ae40-32926a53ed9a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-27k29" Feb 02 17:39:42 crc kubenswrapper[4858]: I0202 17:39:42.261617 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b94ab7ee-11a9-42ea-ae40-32926a53ed9a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-27k29\" (UID: \"b94ab7ee-11a9-42ea-ae40-32926a53ed9a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-27k29" Feb 02 17:39:42 crc kubenswrapper[4858]: I0202 17:39:42.267457 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b94ab7ee-11a9-42ea-ae40-32926a53ed9a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-27k29\" (UID: \"b94ab7ee-11a9-42ea-ae40-32926a53ed9a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-27k29" Feb 02 17:39:42 crc kubenswrapper[4858]: I0202 17:39:42.275312 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b94ab7ee-11a9-42ea-ae40-32926a53ed9a-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-27k29\" (UID: \"b94ab7ee-11a9-42ea-ae40-32926a53ed9a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-27k29" Feb 02 17:39:42 crc kubenswrapper[4858]: I0202 17:39:42.288242 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62dj5\" (UniqueName: \"kubernetes.io/projected/b94ab7ee-11a9-42ea-ae40-32926a53ed9a-kube-api-access-62dj5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-27k29\" (UID: \"b94ab7ee-11a9-42ea-ae40-32926a53ed9a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-27k29" Feb 02 17:39:42 crc kubenswrapper[4858]: I0202 17:39:42.341462 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-27k29" Feb 02 17:39:42 crc kubenswrapper[4858]: I0202 17:39:42.906153 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-27k29"] Feb 02 17:39:43 crc kubenswrapper[4858]: I0202 17:39:43.913828 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-27k29" event={"ID":"b94ab7ee-11a9-42ea-ae40-32926a53ed9a","Type":"ContainerStarted","Data":"39d61863d03b3c62e173a01604da15087f7cdd6aab82d0fc1e36cf775a26d754"} Feb 02 17:39:43 crc kubenswrapper[4858]: I0202 17:39:43.914452 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-27k29" event={"ID":"b94ab7ee-11a9-42ea-ae40-32926a53ed9a","Type":"ContainerStarted","Data":"8b8393a87c7f8c54d74ce800d7cf91af6fbc31196d54036e48aaa85bd548ed00"} Feb 02 17:39:43 crc kubenswrapper[4858]: I0202 17:39:43.934799 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-27k29" podStartSLOduration=2.481658895 podStartE2EDuration="2.934776366s" podCreationTimestamp="2026-02-02 17:39:41 +0000 UTC" firstStartedPulling="2026-02-02 17:39:42.914887914 +0000 UTC m=+1484.067303179" lastFinishedPulling="2026-02-02 17:39:43.368005375 +0000 UTC m=+1484.520420650" observedRunningTime="2026-02-02 17:39:43.926913313 +0000 UTC m=+1485.079328688" watchObservedRunningTime="2026-02-02 17:39:43.934776366 +0000 UTC m=+1485.087191641" Feb 02 17:40:19 crc kubenswrapper[4858]: I0202 17:40:19.053967 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-ddwfk"] Feb 02 17:40:19 crc kubenswrapper[4858]: I0202 17:40:19.064689 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-ddwfk"] Feb 02 17:40:20 crc kubenswrapper[4858]: I0202 17:40:20.042327 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-xf4m2"] Feb 02 17:40:20 crc kubenswrapper[4858]: I0202 17:40:20.061363 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-t8l7s"] Feb 02 17:40:20 crc kubenswrapper[4858]: I0202 17:40:20.073080 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b266-account-create-update-zmm9r"] Feb 02 17:40:20 crc kubenswrapper[4858]: I0202 17:40:20.084401 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-xf4m2"] Feb 02 17:40:20 crc kubenswrapper[4858]: I0202 17:40:20.092309 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-e76f-account-create-update-ng2zq"] Feb 02 17:40:20 crc kubenswrapper[4858]: I0202 17:40:20.099571 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-0edf-account-create-update-drkj7"] Feb 02 17:40:20 crc kubenswrapper[4858]: I0202 17:40:20.107472 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b266-account-create-update-zmm9r"] Feb 02 17:40:20 crc kubenswrapper[4858]: I0202 17:40:20.115322 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-t8l7s"] Feb 02 17:40:20 crc kubenswrapper[4858]: I0202 17:40:20.123363 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-e76f-account-create-update-ng2zq"] Feb 02 17:40:20 crc kubenswrapper[4858]: I0202 17:40:20.131348 4858 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-0edf-account-create-update-drkj7"] Feb 02 17:40:20 crc kubenswrapper[4858]: I0202 17:40:20.412148 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17d7410b-4b1f-4a80-ab62-adf84d324b21" path="/var/lib/kubelet/pods/17d7410b-4b1f-4a80-ab62-adf84d324b21/volumes" Feb 02 17:40:20 crc kubenswrapper[4858]: I0202 17:40:20.412915 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="431fd5ca-da9e-4493-acf7-670eb92cf3aa" path="/var/lib/kubelet/pods/431fd5ca-da9e-4493-acf7-670eb92cf3aa/volumes" Feb 02 17:40:20 crc kubenswrapper[4858]: I0202 17:40:20.413424 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="757ea041-5d2d-4b24-9f27-ca5ee8116763" path="/var/lib/kubelet/pods/757ea041-5d2d-4b24-9f27-ca5ee8116763/volumes" Feb 02 17:40:20 crc kubenswrapper[4858]: I0202 17:40:20.413998 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8493a4cd-8a10-45f2-a063-4e0ce71de60f" path="/var/lib/kubelet/pods/8493a4cd-8a10-45f2-a063-4e0ce71de60f/volumes" Feb 02 17:40:20 crc kubenswrapper[4858]: I0202 17:40:20.415144 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac79238c-c640-41cc-b4a7-774e06727bb0" path="/var/lib/kubelet/pods/ac79238c-c640-41cc-b4a7-774e06727bb0/volumes" Feb 02 17:40:20 crc kubenswrapper[4858]: I0202 17:40:20.415674 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0e82b54-f2bd-4307-bd7a-f613c0dac23c" path="/var/lib/kubelet/pods/e0e82b54-f2bd-4307-bd7a-f613c0dac23c/volumes" Feb 02 17:40:43 crc kubenswrapper[4858]: I0202 17:40:43.047747 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-8rcln"] Feb 02 17:40:43 crc kubenswrapper[4858]: I0202 17:40:43.057594 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-8rcln"] Feb 02 17:40:44 crc kubenswrapper[4858]: I0202 17:40:44.422814 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8bf5ccf-5a03-4178-b13a-1134553abfcb" path="/var/lib/kubelet/pods/e8bf5ccf-5a03-4178-b13a-1134553abfcb/volumes" Feb 02 17:40:57 crc kubenswrapper[4858]: I0202 17:40:57.037357 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-cv9mb"] Feb 02 17:40:57 crc kubenswrapper[4858]: I0202 17:40:57.054590 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-cv9mb"] Feb 02 17:40:58 crc kubenswrapper[4858]: I0202 17:40:58.413225 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cb04b10-2484-4c41-903c-d12ea9ab3600" path="/var/lib/kubelet/pods/5cb04b10-2484-4c41-903c-d12ea9ab3600/volumes" Feb 02 17:41:01 crc kubenswrapper[4858]: I0202 17:41:01.038563 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-jmm6p"] Feb 02 17:41:01 crc kubenswrapper[4858]: I0202 17:41:01.047261 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-jmm6p"] Feb 02 17:41:02 crc kubenswrapper[4858]: I0202 17:41:02.029792 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-cc1b-account-create-update-tbdcd"] Feb 02 17:41:02 crc kubenswrapper[4858]: I0202 17:41:02.045455 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-kljgl"] Feb 02 17:41:02 crc kubenswrapper[4858]: I0202 17:41:02.055718 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-c58f-account-create-update-qp9nt"] Feb 02 17:41:02 crc kubenswrapper[4858]: I0202 17:41:02.066764 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-kljgl"] Feb 02 17:41:02 crc kubenswrapper[4858]: I0202 17:41:02.078512 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-cc1b-account-create-update-tbdcd"] Feb 02 17:41:02 crc kubenswrapper[4858]: I0202 17:41:02.087723 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-ps6sb"] Feb 02 17:41:02 crc kubenswrapper[4858]: I0202 17:41:02.095718 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-c58f-account-create-update-qp9nt"] Feb 02 17:41:02 crc kubenswrapper[4858]: I0202 17:41:02.104089 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-ps6sb"] Feb 02 17:41:02 crc kubenswrapper[4858]: I0202 17:41:02.111675 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86ad-account-create-update-s4qdw"] Feb 02 17:41:02 crc kubenswrapper[4858]: I0202 17:41:02.119176 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-86ad-account-create-update-s4qdw"] Feb 02 17:41:02 crc kubenswrapper[4858]: I0202 17:41:02.412825 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="158f29d9-d8c9-47ee-912c-05108d7bec02" path="/var/lib/kubelet/pods/158f29d9-d8c9-47ee-912c-05108d7bec02/volumes" Feb 02 17:41:02 crc kubenswrapper[4858]: I0202 17:41:02.414267 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eb91469-c691-4b21-a5d3-e422d2d36cc3" path="/var/lib/kubelet/pods/4eb91469-c691-4b21-a5d3-e422d2d36cc3/volumes" Feb 02 17:41:02 crc kubenswrapper[4858]: I0202 17:41:02.415121 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="509b5c9b-875d-410b-b427-d0ba51cf798c" path="/var/lib/kubelet/pods/509b5c9b-875d-410b-b427-d0ba51cf798c/volumes" Feb 02 17:41:02 crc kubenswrapper[4858]: I0202 17:41:02.415935 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a82f861-3468-4057-851d-05836166f30b" path="/var/lib/kubelet/pods/8a82f861-3468-4057-851d-05836166f30b/volumes" Feb 02 17:41:02 crc kubenswrapper[4858]: I0202 17:41:02.417322 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="907ec6b3-b751-400a-95ea-e69381ac7785" path="/var/lib/kubelet/pods/907ec6b3-b751-400a-95ea-e69381ac7785/volumes" Feb 02 17:41:02 crc kubenswrapper[4858]: I0202 17:41:02.417891 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4449167-7d55-4675-9dd6-20094b472bd0" path="/var/lib/kubelet/pods/a4449167-7d55-4675-9dd6-20094b472bd0/volumes" Feb 02 17:41:08 crc kubenswrapper[4858]: I0202 17:41:08.343088 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w6kgt"] Feb 02 17:41:08 crc kubenswrapper[4858]: I0202 17:41:08.347237 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w6kgt" Feb 02 17:41:08 crc kubenswrapper[4858]: I0202 17:41:08.375153 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7249a924-17cd-4ee6-b7df-709372c90d10-utilities\") pod \"certified-operators-w6kgt\" (UID: \"7249a924-17cd-4ee6-b7df-709372c90d10\") " pod="openshift-marketplace/certified-operators-w6kgt" Feb 02 17:41:08 crc kubenswrapper[4858]: I0202 17:41:08.375554 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7249a924-17cd-4ee6-b7df-709372c90d10-catalog-content\") pod \"certified-operators-w6kgt\" (UID: \"7249a924-17cd-4ee6-b7df-709372c90d10\") " pod="openshift-marketplace/certified-operators-w6kgt" Feb 02 17:41:08 crc kubenswrapper[4858]: I0202 17:41:08.375620 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zzfz\" (UniqueName: \"kubernetes.io/projected/7249a924-17cd-4ee6-b7df-709372c90d10-kube-api-access-7zzfz\") pod \"certified-operators-w6kgt\" (UID: \"7249a924-17cd-4ee6-b7df-709372c90d10\") " pod="openshift-marketplace/certified-operators-w6kgt" Feb 02 17:41:08 crc kubenswrapper[4858]: I0202 17:41:08.392574 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6kgt"] Feb 02 17:41:08 crc kubenswrapper[4858]: I0202 17:41:08.477683 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7249a924-17cd-4ee6-b7df-709372c90d10-catalog-content\") pod \"certified-operators-w6kgt\" (UID: \"7249a924-17cd-4ee6-b7df-709372c90d10\") " pod="openshift-marketplace/certified-operators-w6kgt" Feb 02 17:41:08 crc kubenswrapper[4858]: I0202 17:41:08.477806 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zzfz\" (UniqueName: \"kubernetes.io/projected/7249a924-17cd-4ee6-b7df-709372c90d10-kube-api-access-7zzfz\") pod \"certified-operators-w6kgt\" (UID: \"7249a924-17cd-4ee6-b7df-709372c90d10\") " pod="openshift-marketplace/certified-operators-w6kgt" Feb 02 17:41:08 crc kubenswrapper[4858]: I0202 17:41:08.477935 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7249a924-17cd-4ee6-b7df-709372c90d10-utilities\") pod \"certified-operators-w6kgt\" (UID: \"7249a924-17cd-4ee6-b7df-709372c90d10\") " pod="openshift-marketplace/certified-operators-w6kgt" Feb 02 17:41:08 crc kubenswrapper[4858]: I0202 17:41:08.478578 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7249a924-17cd-4ee6-b7df-709372c90d10-utilities\") pod \"certified-operators-w6kgt\" (UID: \"7249a924-17cd-4ee6-b7df-709372c90d10\") " pod="openshift-marketplace/certified-operators-w6kgt" Feb 02 17:41:08 crc kubenswrapper[4858]: I0202 17:41:08.478900 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7249a924-17cd-4ee6-b7df-709372c90d10-catalog-content\") pod \"certified-operators-w6kgt\" (UID: \"7249a924-17cd-4ee6-b7df-709372c90d10\") " pod="openshift-marketplace/certified-operators-w6kgt" Feb 02 17:41:08 crc kubenswrapper[4858]: I0202 17:41:08.502763 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7zzfz\" (UniqueName: \"kubernetes.io/projected/7249a924-17cd-4ee6-b7df-709372c90d10-kube-api-access-7zzfz\") pod \"certified-operators-w6kgt\" (UID: \"7249a924-17cd-4ee6-b7df-709372c90d10\") " pod="openshift-marketplace/certified-operators-w6kgt" Feb 02 17:41:08 crc kubenswrapper[4858]: I0202 17:41:08.737416 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6kgt" Feb 02 17:41:09 crc kubenswrapper[4858]: I0202 17:41:09.159529 4858 scope.go:117] "RemoveContainer" containerID="a6dbe3f5701a69a0f031b74e22ad4f6908bfb6e1670d155273ad4679174510fb" Feb 02 17:41:09 crc kubenswrapper[4858]: I0202 17:41:09.192488 4858 scope.go:117] "RemoveContainer" containerID="d9388121faaf5987db32569578282ee74c81c1ac0cbfcf073caa9f9ba6b4ebeb" Feb 02 17:41:09 crc kubenswrapper[4858]: I0202 17:41:09.227826 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6kgt"] Feb 02 17:41:09 crc kubenswrapper[4858]: I0202 17:41:09.254014 4858 scope.go:117] "RemoveContainer" containerID="ee041bd0bf2041a6b27e8b3f93d821b40926a93ecd373f41251083013bc947ee" Feb 02 17:41:09 crc kubenswrapper[4858]: I0202 17:41:09.299095 4858 scope.go:117] "RemoveContainer" containerID="1006f534a6c5a1f39109ab71d7511e7397d311ee5b16765bfe64f9869ee3d37b" Feb 02 17:41:09 crc kubenswrapper[4858]: I0202 17:41:09.318201 4858 scope.go:117] "RemoveContainer" containerID="8a6f57c243bb6007ecee76ea14af508d63960756875d320370d8c3d35bebcae4" Feb 02 17:41:09 crc kubenswrapper[4858]: I0202 17:41:09.346825 4858 scope.go:117] "RemoveContainer" containerID="ef2ff70d7a71512ab5695ceefb57ca6f23c8b82e7c6dfce5477a8ba990632089" Feb 02 17:41:09 crc kubenswrapper[4858]: I0202 17:41:09.381001 4858 scope.go:117] "RemoveContainer" containerID="a7c6251270468a60f8437db0f238d31075f1433aaa75c717687708698d74c638" Feb 02 17:41:09 crc kubenswrapper[4858]: I0202 17:41:09.426220 4858 scope.go:117] "RemoveContainer" containerID="c2a81f5dc1c87004feb9dc6e964a565898639418a7b102cf00fbd3515fa9bd35" Feb 02 17:41:09 crc kubenswrapper[4858]: I0202 17:41:09.449091 4858 scope.go:117] "RemoveContainer" containerID="546d7bdbe1486b73dce958069c911d4d02dcc15467347a6cfa28b921fc78a38a" Feb 02 17:41:09 crc kubenswrapper[4858]: I0202 17:41:09.585833 4858 scope.go:117] "RemoveContainer" containerID="4ecb52312985551098962a856e67798ee0de396267174ead4eb05e996dd98102" Feb 02 17:41:09 crc kubenswrapper[4858]: I0202 17:41:09.631494 4858 scope.go:117] "RemoveContainer" containerID="44c21ee6f8c701a9fe9b1acd0e84567e27b279fc2646e15b35a2280ace539874" Feb 02 17:41:09 crc kubenswrapper[4858]: I0202 17:41:09.710194 4858 scope.go:117] "RemoveContainer" containerID="034d05c68459e8bff73de37a62c72187f51a0b4a69827fe49454afefa42f13f2" Feb 02 17:41:09 crc kubenswrapper[4858]: I0202 17:41:09.731840 4858 scope.go:117] "RemoveContainer" containerID="035639d1ccc51a5f4036284ea2cfc6ceffcb99a7152c499bf1e7844b070fcc4f" Feb 02 17:41:09 crc kubenswrapper[4858]: I0202 17:41:09.761843 4858 generic.go:334] "Generic (PLEG): container finished" podID="7249a924-17cd-4ee6-b7df-709372c90d10" containerID="c24650896a63454526bf0b734adb42783c45de1b4dc76bb66a028d26d9ba7f8c" exitCode=0 Feb 02 17:41:09 crc kubenswrapper[4858]: I0202 17:41:09.761945 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6kgt" 
event={"ID":"7249a924-17cd-4ee6-b7df-709372c90d10","Type":"ContainerDied","Data":"c24650896a63454526bf0b734adb42783c45de1b4dc76bb66a028d26d9ba7f8c"} Feb 02 17:41:09 crc kubenswrapper[4858]: I0202 17:41:09.762008 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6kgt" event={"ID":"7249a924-17cd-4ee6-b7df-709372c90d10","Type":"ContainerStarted","Data":"29d36850f264e6b324171f30b7c37e2e6b6df410f0dc6d733ac5ac229f3723a9"} Feb 02 17:41:09 crc kubenswrapper[4858]: I0202 17:41:09.768478 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 17:41:09 crc kubenswrapper[4858]: I0202 17:41:09.782268 4858 scope.go:117] "RemoveContainer" containerID="dbae0a01ba724a0aa7dfcd6eccc8610d11cb6642d9ecfd5b3c127c88fa17ccd1" Feb 02 17:41:10 crc kubenswrapper[4858]: I0202 17:41:10.050773 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-n6lz8"] Feb 02 17:41:10 crc kubenswrapper[4858]: I0202 17:41:10.058419 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-n6lz8"] Feb 02 17:41:10 crc kubenswrapper[4858]: I0202 17:41:10.413856 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="668a330f-e46e-4be4-9d42-2a547988e82b" path="/var/lib/kubelet/pods/668a330f-e46e-4be4-9d42-2a547988e82b/volumes" Feb 02 17:41:10 crc kubenswrapper[4858]: I0202 17:41:10.773135 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6kgt" event={"ID":"7249a924-17cd-4ee6-b7df-709372c90d10","Type":"ContainerStarted","Data":"56eb9690ba5d9429f6fd5dc4abec27e1209636e93ae64bf088351055e5c08f9e"} Feb 02 17:41:11 crc kubenswrapper[4858]: I0202 17:41:11.782741 4858 generic.go:334] "Generic (PLEG): container finished" podID="7249a924-17cd-4ee6-b7df-709372c90d10" containerID="56eb9690ba5d9429f6fd5dc4abec27e1209636e93ae64bf088351055e5c08f9e" exitCode=0 Feb 02 17:41:11 crc kubenswrapper[4858]: I0202 17:41:11.782805 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6kgt" event={"ID":"7249a924-17cd-4ee6-b7df-709372c90d10","Type":"ContainerDied","Data":"56eb9690ba5d9429f6fd5dc4abec27e1209636e93ae64bf088351055e5c08f9e"} Feb 02 17:41:12 crc kubenswrapper[4858]: I0202 17:41:12.795356 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6kgt" event={"ID":"7249a924-17cd-4ee6-b7df-709372c90d10","Type":"ContainerStarted","Data":"26807aaf26979618b2239d9b6a7d8ab17e888dcd1e7f6dbbc5cb8d01a312613e"} Feb 02 17:41:12 crc kubenswrapper[4858]: I0202 17:41:12.817270 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w6kgt" podStartSLOduration=2.395423507 podStartE2EDuration="4.81724629s" podCreationTimestamp="2026-02-02 17:41:08 +0000 UTC" firstStartedPulling="2026-02-02 17:41:09.768240549 +0000 UTC m=+1570.920655814" lastFinishedPulling="2026-02-02 17:41:12.190063332 +0000 UTC m=+1573.342478597" observedRunningTime="2026-02-02 17:41:12.812416673 +0000 UTC m=+1573.964831938" watchObservedRunningTime="2026-02-02 17:41:12.81724629 +0000 UTC m=+1573.969661555" Feb 02 17:41:13 crc kubenswrapper[4858]: I0202 17:41:13.111517 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kqm9l"] Feb 02 17:41:13 crc kubenswrapper[4858]: I0202 17:41:13.114930 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kqm9l" Feb 02 17:41:13 crc kubenswrapper[4858]: I0202 17:41:13.127221 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kqm9l"] Feb 02 17:41:13 crc kubenswrapper[4858]: I0202 17:41:13.169441 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544f1bdd-cae9-4c0c-bf4a-7500702f1ee4-catalog-content\") pod \"redhat-operators-kqm9l\" (UID: \"544f1bdd-cae9-4c0c-bf4a-7500702f1ee4\") " pod="openshift-marketplace/redhat-operators-kqm9l" Feb 02 17:41:13 crc kubenswrapper[4858]: I0202 17:41:13.169538 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544f1bdd-cae9-4c0c-bf4a-7500702f1ee4-utilities\") pod \"redhat-operators-kqm9l\" (UID: \"544f1bdd-cae9-4c0c-bf4a-7500702f1ee4\") " pod="openshift-marketplace/redhat-operators-kqm9l" Feb 02 17:41:13 crc kubenswrapper[4858]: I0202 17:41:13.169757 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlgsx\" (UniqueName: \"kubernetes.io/projected/544f1bdd-cae9-4c0c-bf4a-7500702f1ee4-kube-api-access-qlgsx\") pod \"redhat-operators-kqm9l\" (UID: \"544f1bdd-cae9-4c0c-bf4a-7500702f1ee4\") " pod="openshift-marketplace/redhat-operators-kqm9l" Feb 02 17:41:13 crc kubenswrapper[4858]: I0202 17:41:13.271951 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544f1bdd-cae9-4c0c-bf4a-7500702f1ee4-catalog-content\") pod \"redhat-operators-kqm9l\" (UID: \"544f1bdd-cae9-4c0c-bf4a-7500702f1ee4\") " pod="openshift-marketplace/redhat-operators-kqm9l" Feb 02 17:41:13 crc kubenswrapper[4858]: I0202 17:41:13.272112 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544f1bdd-cae9-4c0c-bf4a-7500702f1ee4-utilities\") pod \"redhat-operators-kqm9l\" (UID: \"544f1bdd-cae9-4c0c-bf4a-7500702f1ee4\") " pod="openshift-marketplace/redhat-operators-kqm9l" Feb 02 17:41:13 crc kubenswrapper[4858]: I0202 17:41:13.272167 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlgsx\" (UniqueName: \"kubernetes.io/projected/544f1bdd-cae9-4c0c-bf4a-7500702f1ee4-kube-api-access-qlgsx\") pod \"redhat-operators-kqm9l\" (UID: \"544f1bdd-cae9-4c0c-bf4a-7500702f1ee4\") " pod="openshift-marketplace/redhat-operators-kqm9l" Feb 02 17:41:13 crc kubenswrapper[4858]: I0202 17:41:13.272592 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544f1bdd-cae9-4c0c-bf4a-7500702f1ee4-catalog-content\") pod \"redhat-operators-kqm9l\" (UID: \"544f1bdd-cae9-4c0c-bf4a-7500702f1ee4\") " pod="openshift-marketplace/redhat-operators-kqm9l" Feb 02 17:41:13 crc kubenswrapper[4858]: I0202 17:41:13.272640 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544f1bdd-cae9-4c0c-bf4a-7500702f1ee4-utilities\") pod \"redhat-operators-kqm9l\" (UID: \"544f1bdd-cae9-4c0c-bf4a-7500702f1ee4\") " pod="openshift-marketplace/redhat-operators-kqm9l" Feb 02 17:41:13 crc kubenswrapper[4858]: I0202 17:41:13.303030 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qlgsx\" (UniqueName: \"kubernetes.io/projected/544f1bdd-cae9-4c0c-bf4a-7500702f1ee4-kube-api-access-qlgsx\") pod \"redhat-operators-kqm9l\" (UID: \"544f1bdd-cae9-4c0c-bf4a-7500702f1ee4\") " pod="openshift-marketplace/redhat-operators-kqm9l" Feb 02 17:41:13 crc kubenswrapper[4858]: I0202 17:41:13.441727 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kqm9l" Feb 02 17:41:13 crc kubenswrapper[4858]: I0202 17:41:13.937631 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kqm9l"] Feb 02 17:41:13 crc kubenswrapper[4858]: W0202 17:41:13.944384 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod544f1bdd_cae9_4c0c_bf4a_7500702f1ee4.slice/crio-27cd8e1e381ee88b4297d9ef7b9311b4976f427254cc403461f1b674c382390a WatchSource:0}: Error finding container 27cd8e1e381ee88b4297d9ef7b9311b4976f427254cc403461f1b674c382390a: Status 404 returned error can't find the container with id 27cd8e1e381ee88b4297d9ef7b9311b4976f427254cc403461f1b674c382390a Feb 02 17:41:14 crc kubenswrapper[4858]: I0202 17:41:14.816654 4858 generic.go:334] "Generic (PLEG): container finished" podID="544f1bdd-cae9-4c0c-bf4a-7500702f1ee4" containerID="9fc96ea15aa79b771b339eaa3f76fd6d214b6fb2929548deb43b63b3230c8166" exitCode=0 Feb 02 17:41:14 crc kubenswrapper[4858]: I0202 17:41:14.816745 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqm9l" event={"ID":"544f1bdd-cae9-4c0c-bf4a-7500702f1ee4","Type":"ContainerDied","Data":"9fc96ea15aa79b771b339eaa3f76fd6d214b6fb2929548deb43b63b3230c8166"} Feb 02 17:41:14 crc kubenswrapper[4858]: I0202 17:41:14.817362 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqm9l" event={"ID":"544f1bdd-cae9-4c0c-bf4a-7500702f1ee4","Type":"ContainerStarted","Data":"27cd8e1e381ee88b4297d9ef7b9311b4976f427254cc403461f1b674c382390a"} Feb 02 17:41:15 crc kubenswrapper[4858]: I0202 17:41:15.826619 4858 generic.go:334] "Generic (PLEG): container finished" podID="b94ab7ee-11a9-42ea-ae40-32926a53ed9a" containerID="39d61863d03b3c62e173a01604da15087f7cdd6aab82d0fc1e36cf775a26d754" exitCode=0 Feb 02 17:41:15 crc kubenswrapper[4858]: I0202 17:41:15.826697 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-27k29" event={"ID":"b94ab7ee-11a9-42ea-ae40-32926a53ed9a","Type":"ContainerDied","Data":"39d61863d03b3c62e173a01604da15087f7cdd6aab82d0fc1e36cf775a26d754"} Feb 02 17:41:16 crc kubenswrapper[4858]: I0202 17:41:16.839745 4858 generic.go:334] "Generic (PLEG): container finished" podID="544f1bdd-cae9-4c0c-bf4a-7500702f1ee4" containerID="2a4e468c5b9d23244d310984548ca0e6138892c883b73b13e3c0150644b6855d" exitCode=0 Feb 02 17:41:16 crc kubenswrapper[4858]: I0202 17:41:16.839795 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqm9l" event={"ID":"544f1bdd-cae9-4c0c-bf4a-7500702f1ee4","Type":"ContainerDied","Data":"2a4e468c5b9d23244d310984548ca0e6138892c883b73b13e3c0150644b6855d"} Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.442307 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-27k29" Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.547096 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b94ab7ee-11a9-42ea-ae40-32926a53ed9a-inventory\") pod \"b94ab7ee-11a9-42ea-ae40-32926a53ed9a\" (UID: \"b94ab7ee-11a9-42ea-ae40-32926a53ed9a\") " Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.547208 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b94ab7ee-11a9-42ea-ae40-32926a53ed9a-ssh-key-openstack-edpm-ipam\") pod \"b94ab7ee-11a9-42ea-ae40-32926a53ed9a\" (UID: \"b94ab7ee-11a9-42ea-ae40-32926a53ed9a\") " Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.547242 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62dj5\" (UniqueName: \"kubernetes.io/projected/b94ab7ee-11a9-42ea-ae40-32926a53ed9a-kube-api-access-62dj5\") pod \"b94ab7ee-11a9-42ea-ae40-32926a53ed9a\" (UID: \"b94ab7ee-11a9-42ea-ae40-32926a53ed9a\") " Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.553697 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b94ab7ee-11a9-42ea-ae40-32926a53ed9a-kube-api-access-62dj5" (OuterVolumeSpecName: "kube-api-access-62dj5") pod "b94ab7ee-11a9-42ea-ae40-32926a53ed9a" (UID: "b94ab7ee-11a9-42ea-ae40-32926a53ed9a"). InnerVolumeSpecName "kube-api-access-62dj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.578327 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b94ab7ee-11a9-42ea-ae40-32926a53ed9a-inventory" (OuterVolumeSpecName: "inventory") pod "b94ab7ee-11a9-42ea-ae40-32926a53ed9a" (UID: "b94ab7ee-11a9-42ea-ae40-32926a53ed9a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.579567 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b94ab7ee-11a9-42ea-ae40-32926a53ed9a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b94ab7ee-11a9-42ea-ae40-32926a53ed9a" (UID: "b94ab7ee-11a9-42ea-ae40-32926a53ed9a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.649471 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b94ab7ee-11a9-42ea-ae40-32926a53ed9a-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.649518 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b94ab7ee-11a9-42ea-ae40-32926a53ed9a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.649532 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62dj5\" (UniqueName: \"kubernetes.io/projected/b94ab7ee-11a9-42ea-ae40-32926a53ed9a-kube-api-access-62dj5\") on node \"crc\" DevicePath \"\"" Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.851025 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-27k29" event={"ID":"b94ab7ee-11a9-42ea-ae40-32926a53ed9a","Type":"ContainerDied","Data":"8b8393a87c7f8c54d74ce800d7cf91af6fbc31196d54036e48aaa85bd548ed00"} Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.851356 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b8393a87c7f8c54d74ce800d7cf91af6fbc31196d54036e48aaa85bd548ed00" Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.851427 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-27k29" Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.932006 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq"] Feb 02 17:41:17 crc kubenswrapper[4858]: E0202 17:41:17.932568 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b94ab7ee-11a9-42ea-ae40-32926a53ed9a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.932597 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b94ab7ee-11a9-42ea-ae40-32926a53ed9a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.932835 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b94ab7ee-11a9-42ea-ae40-32926a53ed9a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.933701 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq" Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.936859 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.936954 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.937158 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q7l94" Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.937844 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 17:41:17 crc kubenswrapper[4858]: I0202 17:41:17.942469 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq"] Feb 02 17:41:18 crc kubenswrapper[4858]: I0202 17:41:18.056385 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07f60796-9efa-4245-955f-14c0c16c918d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq\" (UID: \"07f60796-9efa-4245-955f-14c0c16c918d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq" Feb 02 17:41:18 crc kubenswrapper[4858]: I0202 17:41:18.056869 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmq76\" (UniqueName: \"kubernetes.io/projected/07f60796-9efa-4245-955f-14c0c16c918d-kube-api-access-xmq76\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq\" (UID: \"07f60796-9efa-4245-955f-14c0c16c918d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq" Feb 02 17:41:18 crc kubenswrapper[4858]: I0202 17:41:18.057077 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07f60796-9efa-4245-955f-14c0c16c918d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq\" (UID: \"07f60796-9efa-4245-955f-14c0c16c918d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq" Feb 02 17:41:18 crc kubenswrapper[4858]: I0202 17:41:18.159057 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07f60796-9efa-4245-955f-14c0c16c918d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq\" (UID: \"07f60796-9efa-4245-955f-14c0c16c918d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq" Feb 02 17:41:18 crc kubenswrapper[4858]: I0202 17:41:18.159208 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmq76\" (UniqueName: \"kubernetes.io/projected/07f60796-9efa-4245-955f-14c0c16c918d-kube-api-access-xmq76\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq\" (UID: \"07f60796-9efa-4245-955f-14c0c16c918d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq" Feb 02 17:41:18 crc kubenswrapper[4858]: I0202 17:41:18.159262 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/07f60796-9efa-4245-955f-14c0c16c918d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq\" (UID: \"07f60796-9efa-4245-955f-14c0c16c918d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq" Feb 02 17:41:18 crc kubenswrapper[4858]: I0202 17:41:18.162998 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07f60796-9efa-4245-955f-14c0c16c918d-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq\" (UID: \"07f60796-9efa-4245-955f-14c0c16c918d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq" Feb 02 17:41:18 crc kubenswrapper[4858]: I0202 17:41:18.163964 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07f60796-9efa-4245-955f-14c0c16c918d-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq\" (UID: \"07f60796-9efa-4245-955f-14c0c16c918d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq" Feb 02 17:41:18 crc kubenswrapper[4858]: I0202 17:41:18.182537 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmq76\" (UniqueName: \"kubernetes.io/projected/07f60796-9efa-4245-955f-14c0c16c918d-kube-api-access-xmq76\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq\" (UID: \"07f60796-9efa-4245-955f-14c0c16c918d\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq" Feb 02 17:41:18 crc kubenswrapper[4858]: I0202 17:41:18.248373 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq" Feb 02 17:41:18 crc kubenswrapper[4858]: I0202 17:41:18.738562 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w6kgt" Feb 02 17:41:18 crc kubenswrapper[4858]: I0202 17:41:18.740705 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w6kgt" Feb 02 17:41:18 crc kubenswrapper[4858]: I0202 17:41:18.748924 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq"] Feb 02 17:41:18 crc kubenswrapper[4858]: W0202 17:41:18.757947 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07f60796_9efa_4245_955f_14c0c16c918d.slice/crio-eb791fd76451e501e0c6c3296ff2a4a3301becb17f21a52cafabe37d736824d8 WatchSource:0}: Error finding container eb791fd76451e501e0c6c3296ff2a4a3301becb17f21a52cafabe37d736824d8: Status 404 returned error can't find the container with id eb791fd76451e501e0c6c3296ff2a4a3301becb17f21a52cafabe37d736824d8 Feb 02 17:41:18 crc kubenswrapper[4858]: I0202 17:41:18.796151 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w6kgt" Feb 02 17:41:18 crc kubenswrapper[4858]: I0202 17:41:18.876494 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq" event={"ID":"07f60796-9efa-4245-955f-14c0c16c918d","Type":"ContainerStarted","Data":"eb791fd76451e501e0c6c3296ff2a4a3301becb17f21a52cafabe37d736824d8"} Feb 02 17:41:18 crc kubenswrapper[4858]: I0202 
17:41:18.879361 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqm9l" event={"ID":"544f1bdd-cae9-4c0c-bf4a-7500702f1ee4","Type":"ContainerStarted","Data":"cacccca7d8cbd2460da0083d5daf45410831d58db75fd9a9c7a85f2817b81307"} Feb 02 17:41:18 crc kubenswrapper[4858]: I0202 17:41:18.906837 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kqm9l" podStartSLOduration=3.073617253 podStartE2EDuration="5.90681217s" podCreationTimestamp="2026-02-02 17:41:13 +0000 UTC" firstStartedPulling="2026-02-02 17:41:14.821187353 +0000 UTC m=+1575.973602618" lastFinishedPulling="2026-02-02 17:41:17.65438226 +0000 UTC m=+1578.806797535" observedRunningTime="2026-02-02 17:41:18.89764192 +0000 UTC m=+1580.050057195" watchObservedRunningTime="2026-02-02 17:41:18.90681217 +0000 UTC m=+1580.059227455" Feb 02 17:41:18 crc kubenswrapper[4858]: I0202 17:41:18.921328 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w6kgt" Feb 02 17:41:19 crc kubenswrapper[4858]: I0202 17:41:19.899457 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq" event={"ID":"07f60796-9efa-4245-955f-14c0c16c918d","Type":"ContainerStarted","Data":"24dad98c8692b4b2e215f9751f8ef05ee7451e38ec062be21e31788387171964"} Feb 02 17:41:19 crc kubenswrapper[4858]: I0202 17:41:19.947455 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq" podStartSLOduration=2.364817799 podStartE2EDuration="2.947430261s" podCreationTimestamp="2026-02-02 17:41:17 +0000 UTC" firstStartedPulling="2026-02-02 17:41:18.76081049 +0000 UTC m=+1579.913225755" lastFinishedPulling="2026-02-02 17:41:19.343422952 +0000 UTC m=+1580.495838217" observedRunningTime="2026-02-02 17:41:19.943936122 +0000 UTC m=+1581.096351397" watchObservedRunningTime="2026-02-02 17:41:19.947430261 +0000 UTC m=+1581.099845526" Feb 02 17:41:20 crc kubenswrapper[4858]: I0202 17:41:20.894312 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6kgt"] Feb 02 17:41:20 crc kubenswrapper[4858]: I0202 17:41:20.907208 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w6kgt" podUID="7249a924-17cd-4ee6-b7df-709372c90d10" containerName="registry-server" containerID="cri-o://26807aaf26979618b2239d9b6a7d8ab17e888dcd1e7f6dbbc5cb8d01a312613e" gracePeriod=2 Feb 02 17:41:21 crc kubenswrapper[4858]: I0202 17:41:21.354421 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w6kgt" Feb 02 17:41:21 crc kubenswrapper[4858]: I0202 17:41:21.472955 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7249a924-17cd-4ee6-b7df-709372c90d10-utilities\") pod \"7249a924-17cd-4ee6-b7df-709372c90d10\" (UID: \"7249a924-17cd-4ee6-b7df-709372c90d10\") " Feb 02 17:41:21 crc kubenswrapper[4858]: I0202 17:41:21.473157 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zzfz\" (UniqueName: \"kubernetes.io/projected/7249a924-17cd-4ee6-b7df-709372c90d10-kube-api-access-7zzfz\") pod \"7249a924-17cd-4ee6-b7df-709372c90d10\" (UID: \"7249a924-17cd-4ee6-b7df-709372c90d10\") " Feb 02 17:41:21 crc kubenswrapper[4858]: I0202 17:41:21.473185 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7249a924-17cd-4ee6-b7df-709372c90d10-catalog-content\") pod \"7249a924-17cd-4ee6-b7df-709372c90d10\" (UID: \"7249a924-17cd-4ee6-b7df-709372c90d10\") " Feb 02 17:41:21 crc kubenswrapper[4858]: I0202 17:41:21.474531 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7249a924-17cd-4ee6-b7df-709372c90d10-utilities" (OuterVolumeSpecName: "utilities") pod "7249a924-17cd-4ee6-b7df-709372c90d10" (UID: "7249a924-17cd-4ee6-b7df-709372c90d10"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:41:21 crc kubenswrapper[4858]: I0202 17:41:21.481383 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7249a924-17cd-4ee6-b7df-709372c90d10-kube-api-access-7zzfz" (OuterVolumeSpecName: "kube-api-access-7zzfz") pod "7249a924-17cd-4ee6-b7df-709372c90d10" (UID: "7249a924-17cd-4ee6-b7df-709372c90d10"). InnerVolumeSpecName "kube-api-access-7zzfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:41:21 crc kubenswrapper[4858]: I0202 17:41:21.520804 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7249a924-17cd-4ee6-b7df-709372c90d10-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7249a924-17cd-4ee6-b7df-709372c90d10" (UID: "7249a924-17cd-4ee6-b7df-709372c90d10"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:41:21 crc kubenswrapper[4858]: I0202 17:41:21.576695 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7249a924-17cd-4ee6-b7df-709372c90d10-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 17:41:21 crc kubenswrapper[4858]: I0202 17:41:21.576731 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zzfz\" (UniqueName: \"kubernetes.io/projected/7249a924-17cd-4ee6-b7df-709372c90d10-kube-api-access-7zzfz\") on node \"crc\" DevicePath \"\"" Feb 02 17:41:21 crc kubenswrapper[4858]: I0202 17:41:21.576745 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7249a924-17cd-4ee6-b7df-709372c90d10-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 17:41:21 crc kubenswrapper[4858]: I0202 17:41:21.924574 4858 generic.go:334] "Generic (PLEG): container finished" podID="7249a924-17cd-4ee6-b7df-709372c90d10" containerID="26807aaf26979618b2239d9b6a7d8ab17e888dcd1e7f6dbbc5cb8d01a312613e" exitCode=0 Feb 02 17:41:21 crc kubenswrapper[4858]: I0202 17:41:21.924628 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6kgt" event={"ID":"7249a924-17cd-4ee6-b7df-709372c90d10","Type":"ContainerDied","Data":"26807aaf26979618b2239d9b6a7d8ab17e888dcd1e7f6dbbc5cb8d01a312613e"} Feb 02 17:41:21 crc kubenswrapper[4858]: I0202 17:41:21.924660 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6kgt" event={"ID":"7249a924-17cd-4ee6-b7df-709372c90d10","Type":"ContainerDied","Data":"29d36850f264e6b324171f30b7c37e2e6b6df410f0dc6d733ac5ac229f3723a9"} Feb 02 17:41:21 crc kubenswrapper[4858]: I0202 17:41:21.924776 4858 scope.go:117] "RemoveContainer" containerID="26807aaf26979618b2239d9b6a7d8ab17e888dcd1e7f6dbbc5cb8d01a312613e" Feb 02 17:41:21 crc kubenswrapper[4858]: I0202 17:41:21.924936 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w6kgt" Feb 02 17:41:21 crc kubenswrapper[4858]: I0202 17:41:21.953222 4858 scope.go:117] "RemoveContainer" containerID="56eb9690ba5d9429f6fd5dc4abec27e1209636e93ae64bf088351055e5c08f9e" Feb 02 17:41:21 crc kubenswrapper[4858]: I0202 17:41:21.957046 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6kgt"] Feb 02 17:41:21 crc kubenswrapper[4858]: I0202 17:41:21.979844 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w6kgt"] Feb 02 17:41:21 crc kubenswrapper[4858]: I0202 17:41:21.997149 4858 scope.go:117] "RemoveContainer" containerID="c24650896a63454526bf0b734adb42783c45de1b4dc76bb66a028d26d9ba7f8c" Feb 02 17:41:22 crc kubenswrapper[4858]: I0202 17:41:22.037851 4858 scope.go:117] "RemoveContainer" containerID="26807aaf26979618b2239d9b6a7d8ab17e888dcd1e7f6dbbc5cb8d01a312613e" Feb 02 17:41:22 crc kubenswrapper[4858]: E0202 17:41:22.040813 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26807aaf26979618b2239d9b6a7d8ab17e888dcd1e7f6dbbc5cb8d01a312613e\": container with ID starting with 26807aaf26979618b2239d9b6a7d8ab17e888dcd1e7f6dbbc5cb8d01a312613e not found: ID does not exist" containerID="26807aaf26979618b2239d9b6a7d8ab17e888dcd1e7f6dbbc5cb8d01a312613e" Feb 02 17:41:22 crc kubenswrapper[4858]: I0202 17:41:22.040859 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26807aaf26979618b2239d9b6a7d8ab17e888dcd1e7f6dbbc5cb8d01a312613e"} err="failed to get container status \"26807aaf26979618b2239d9b6a7d8ab17e888dcd1e7f6dbbc5cb8d01a312613e\": rpc error: code = NotFound desc = could not find container \"26807aaf26979618b2239d9b6a7d8ab17e888dcd1e7f6dbbc5cb8d01a312613e\": container with ID starting with 26807aaf26979618b2239d9b6a7d8ab17e888dcd1e7f6dbbc5cb8d01a312613e not found: ID does not exist" Feb 02 17:41:22 crc kubenswrapper[4858]: I0202 17:41:22.040889 4858 scope.go:117] "RemoveContainer" containerID="56eb9690ba5d9429f6fd5dc4abec27e1209636e93ae64bf088351055e5c08f9e" Feb 02 17:41:22 crc kubenswrapper[4858]: E0202 17:41:22.041313 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56eb9690ba5d9429f6fd5dc4abec27e1209636e93ae64bf088351055e5c08f9e\": container with ID starting with 56eb9690ba5d9429f6fd5dc4abec27e1209636e93ae64bf088351055e5c08f9e not found: ID does not exist" containerID="56eb9690ba5d9429f6fd5dc4abec27e1209636e93ae64bf088351055e5c08f9e" Feb 02 17:41:22 crc kubenswrapper[4858]: I0202 17:41:22.041352 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56eb9690ba5d9429f6fd5dc4abec27e1209636e93ae64bf088351055e5c08f9e"} err="failed to get container status \"56eb9690ba5d9429f6fd5dc4abec27e1209636e93ae64bf088351055e5c08f9e\": rpc error: code = NotFound desc = could not find container \"56eb9690ba5d9429f6fd5dc4abec27e1209636e93ae64bf088351055e5c08f9e\": container with ID starting with 56eb9690ba5d9429f6fd5dc4abec27e1209636e93ae64bf088351055e5c08f9e not found: ID does not exist" Feb 02 17:41:22 crc kubenswrapper[4858]: I0202 17:41:22.041378 4858 scope.go:117] "RemoveContainer" containerID="c24650896a63454526bf0b734adb42783c45de1b4dc76bb66a028d26d9ba7f8c" Feb 02 17:41:22 crc kubenswrapper[4858]: E0202 17:41:22.041710 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c24650896a63454526bf0b734adb42783c45de1b4dc76bb66a028d26d9ba7f8c\": container with ID starting with c24650896a63454526bf0b734adb42783c45de1b4dc76bb66a028d26d9ba7f8c not found: ID does not exist" containerID="c24650896a63454526bf0b734adb42783c45de1b4dc76bb66a028d26d9ba7f8c" Feb 02 17:41:22 crc kubenswrapper[4858]: I0202 17:41:22.041750 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c24650896a63454526bf0b734adb42783c45de1b4dc76bb66a028d26d9ba7f8c"} err="failed to get container status \"c24650896a63454526bf0b734adb42783c45de1b4dc76bb66a028d26d9ba7f8c\": rpc error: code = NotFound desc = could not find container \"c24650896a63454526bf0b734adb42783c45de1b4dc76bb66a028d26d9ba7f8c\": container with ID starting with c24650896a63454526bf0b734adb42783c45de1b4dc76bb66a028d26d9ba7f8c not found: ID does not exist" Feb 02 17:41:22 crc kubenswrapper[4858]: I0202 17:41:22.410276 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7249a924-17cd-4ee6-b7df-709372c90d10" path="/var/lib/kubelet/pods/7249a924-17cd-4ee6-b7df-709372c90d10/volumes" Feb 02 17:41:23 crc kubenswrapper[4858]: I0202 17:41:23.463195 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kqm9l" Feb 02 17:41:23 crc kubenswrapper[4858]: I0202 17:41:23.463230 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kqm9l" Feb 02 17:41:23 crc kubenswrapper[4858]: I0202 17:41:23.530622 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kqm9l" Feb 02 17:41:24 crc kubenswrapper[4858]: I0202 17:41:24.011002 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kqm9l" Feb 02 17:41:25 crc kubenswrapper[4858]: I0202 17:41:25.094360 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kqm9l"] Feb 02 17:41:25 crc kubenswrapper[4858]: I0202 17:41:25.966061 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kqm9l" podUID="544f1bdd-cae9-4c0c-bf4a-7500702f1ee4" containerName="registry-server" containerID="cri-o://cacccca7d8cbd2460da0083d5daf45410831d58db75fd9a9c7a85f2817b81307" gracePeriod=2 Feb 02 17:41:26 crc kubenswrapper[4858]: I0202 17:41:26.457774 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kqm9l" Feb 02 17:41:26 crc kubenswrapper[4858]: I0202 17:41:26.569768 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlgsx\" (UniqueName: \"kubernetes.io/projected/544f1bdd-cae9-4c0c-bf4a-7500702f1ee4-kube-api-access-qlgsx\") pod \"544f1bdd-cae9-4c0c-bf4a-7500702f1ee4\" (UID: \"544f1bdd-cae9-4c0c-bf4a-7500702f1ee4\") " Feb 02 17:41:26 crc kubenswrapper[4858]: I0202 17:41:26.569840 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544f1bdd-cae9-4c0c-bf4a-7500702f1ee4-utilities\") pod \"544f1bdd-cae9-4c0c-bf4a-7500702f1ee4\" (UID: \"544f1bdd-cae9-4c0c-bf4a-7500702f1ee4\") " Feb 02 17:41:26 crc kubenswrapper[4858]: I0202 17:41:26.569939 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544f1bdd-cae9-4c0c-bf4a-7500702f1ee4-catalog-content\") pod \"544f1bdd-cae9-4c0c-bf4a-7500702f1ee4\" (UID: \"544f1bdd-cae9-4c0c-bf4a-7500702f1ee4\") " Feb 02 17:41:26 crc kubenswrapper[4858]: I0202 17:41:26.571262 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/544f1bdd-cae9-4c0c-bf4a-7500702f1ee4-utilities" (OuterVolumeSpecName: "utilities") pod "544f1bdd-cae9-4c0c-bf4a-7500702f1ee4" (UID: "544f1bdd-cae9-4c0c-bf4a-7500702f1ee4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:41:26 crc kubenswrapper[4858]: I0202 17:41:26.575782 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/544f1bdd-cae9-4c0c-bf4a-7500702f1ee4-kube-api-access-qlgsx" (OuterVolumeSpecName: "kube-api-access-qlgsx") pod "544f1bdd-cae9-4c0c-bf4a-7500702f1ee4" (UID: "544f1bdd-cae9-4c0c-bf4a-7500702f1ee4"). InnerVolumeSpecName "kube-api-access-qlgsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:41:26 crc kubenswrapper[4858]: I0202 17:41:26.672148 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlgsx\" (UniqueName: \"kubernetes.io/projected/544f1bdd-cae9-4c0c-bf4a-7500702f1ee4-kube-api-access-qlgsx\") on node \"crc\" DevicePath \"\"" Feb 02 17:41:26 crc kubenswrapper[4858]: I0202 17:41:26.672193 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544f1bdd-cae9-4c0c-bf4a-7500702f1ee4-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 17:41:26 crc kubenswrapper[4858]: I0202 17:41:26.718176 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/544f1bdd-cae9-4c0c-bf4a-7500702f1ee4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "544f1bdd-cae9-4c0c-bf4a-7500702f1ee4" (UID: "544f1bdd-cae9-4c0c-bf4a-7500702f1ee4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:41:26 crc kubenswrapper[4858]: I0202 17:41:26.774504 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544f1bdd-cae9-4c0c-bf4a-7500702f1ee4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 17:41:26 crc kubenswrapper[4858]: I0202 17:41:26.978695 4858 generic.go:334] "Generic (PLEG): container finished" podID="544f1bdd-cae9-4c0c-bf4a-7500702f1ee4" containerID="cacccca7d8cbd2460da0083d5daf45410831d58db75fd9a9c7a85f2817b81307" exitCode=0 Feb 02 17:41:26 crc kubenswrapper[4858]: I0202 17:41:26.978760 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqm9l" event={"ID":"544f1bdd-cae9-4c0c-bf4a-7500702f1ee4","Type":"ContainerDied","Data":"cacccca7d8cbd2460da0083d5daf45410831d58db75fd9a9c7a85f2817b81307"} Feb 02 17:41:26 crc kubenswrapper[4858]: I0202 17:41:26.978800 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqm9l" event={"ID":"544f1bdd-cae9-4c0c-bf4a-7500702f1ee4","Type":"ContainerDied","Data":"27cd8e1e381ee88b4297d9ef7b9311b4976f427254cc403461f1b674c382390a"} Feb 02 17:41:26 crc kubenswrapper[4858]: I0202 17:41:26.978819 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kqm9l" Feb 02 17:41:26 crc kubenswrapper[4858]: I0202 17:41:26.978843 4858 scope.go:117] "RemoveContainer" containerID="cacccca7d8cbd2460da0083d5daf45410831d58db75fd9a9c7a85f2817b81307" Feb 02 17:41:27 crc kubenswrapper[4858]: I0202 17:41:27.015954 4858 scope.go:117] "RemoveContainer" containerID="2a4e468c5b9d23244d310984548ca0e6138892c883b73b13e3c0150644b6855d" Feb 02 17:41:27 crc kubenswrapper[4858]: I0202 17:41:27.023066 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kqm9l"] Feb 02 17:41:27 crc kubenswrapper[4858]: I0202 17:41:27.034524 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kqm9l"] Feb 02 17:41:27 crc kubenswrapper[4858]: I0202 17:41:27.050734 4858 scope.go:117] "RemoveContainer" containerID="9fc96ea15aa79b771b339eaa3f76fd6d214b6fb2929548deb43b63b3230c8166" Feb 02 17:41:27 crc kubenswrapper[4858]: I0202 17:41:27.101821 4858 scope.go:117] "RemoveContainer" containerID="cacccca7d8cbd2460da0083d5daf45410831d58db75fd9a9c7a85f2817b81307" Feb 02 17:41:27 crc kubenswrapper[4858]: E0202 17:41:27.102282 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cacccca7d8cbd2460da0083d5daf45410831d58db75fd9a9c7a85f2817b81307\": container with ID starting with cacccca7d8cbd2460da0083d5daf45410831d58db75fd9a9c7a85f2817b81307 not found: ID does not exist" containerID="cacccca7d8cbd2460da0083d5daf45410831d58db75fd9a9c7a85f2817b81307" Feb 02 17:41:27 crc kubenswrapper[4858]: I0202 17:41:27.102317 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cacccca7d8cbd2460da0083d5daf45410831d58db75fd9a9c7a85f2817b81307"} err="failed to get container status \"cacccca7d8cbd2460da0083d5daf45410831d58db75fd9a9c7a85f2817b81307\": rpc error: code = NotFound desc = could not find container \"cacccca7d8cbd2460da0083d5daf45410831d58db75fd9a9c7a85f2817b81307\": container with ID starting with cacccca7d8cbd2460da0083d5daf45410831d58db75fd9a9c7a85f2817b81307 not found: ID does not exist" Feb 02 17:41:27 crc 
kubenswrapper[4858]: I0202 17:41:27.102338 4858 scope.go:117] "RemoveContainer" containerID="2a4e468c5b9d23244d310984548ca0e6138892c883b73b13e3c0150644b6855d" Feb 02 17:41:27 crc kubenswrapper[4858]: E0202 17:41:27.102526 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a4e468c5b9d23244d310984548ca0e6138892c883b73b13e3c0150644b6855d\": container with ID starting with 2a4e468c5b9d23244d310984548ca0e6138892c883b73b13e3c0150644b6855d not found: ID does not exist" containerID="2a4e468c5b9d23244d310984548ca0e6138892c883b73b13e3c0150644b6855d" Feb 02 17:41:27 crc kubenswrapper[4858]: I0202 17:41:27.102548 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a4e468c5b9d23244d310984548ca0e6138892c883b73b13e3c0150644b6855d"} err="failed to get container status \"2a4e468c5b9d23244d310984548ca0e6138892c883b73b13e3c0150644b6855d\": rpc error: code = NotFound desc = could not find container \"2a4e468c5b9d23244d310984548ca0e6138892c883b73b13e3c0150644b6855d\": container with ID starting with 2a4e468c5b9d23244d310984548ca0e6138892c883b73b13e3c0150644b6855d not found: ID does not exist" Feb 02 17:41:27 crc kubenswrapper[4858]: I0202 17:41:27.102563 4858 scope.go:117] "RemoveContainer" containerID="9fc96ea15aa79b771b339eaa3f76fd6d214b6fb2929548deb43b63b3230c8166" Feb 02 17:41:27 crc kubenswrapper[4858]: E0202 17:41:27.102842 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fc96ea15aa79b771b339eaa3f76fd6d214b6fb2929548deb43b63b3230c8166\": container with ID starting with 9fc96ea15aa79b771b339eaa3f76fd6d214b6fb2929548deb43b63b3230c8166 not found: ID does not exist" containerID="9fc96ea15aa79b771b339eaa3f76fd6d214b6fb2929548deb43b63b3230c8166" Feb 02 17:41:27 crc kubenswrapper[4858]: I0202 17:41:27.102867 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fc96ea15aa79b771b339eaa3f76fd6d214b6fb2929548deb43b63b3230c8166"} err="failed to get container status \"9fc96ea15aa79b771b339eaa3f76fd6d214b6fb2929548deb43b63b3230c8166\": rpc error: code = NotFound desc = could not find container \"9fc96ea15aa79b771b339eaa3f76fd6d214b6fb2929548deb43b63b3230c8166\": container with ID starting with 9fc96ea15aa79b771b339eaa3f76fd6d214b6fb2929548deb43b63b3230c8166 not found: ID does not exist" Feb 02 17:41:28 crc kubenswrapper[4858]: I0202 17:41:28.416499 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="544f1bdd-cae9-4c0c-bf4a-7500702f1ee4" path="/var/lib/kubelet/pods/544f1bdd-cae9-4c0c-bf4a-7500702f1ee4/volumes" Feb 02 17:41:29 crc kubenswrapper[4858]: I0202 17:41:29.499094 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7wrzq"] Feb 02 17:41:29 crc kubenswrapper[4858]: E0202 17:41:29.499470 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544f1bdd-cae9-4c0c-bf4a-7500702f1ee4" containerName="registry-server" Feb 02 17:41:29 crc kubenswrapper[4858]: I0202 17:41:29.499482 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="544f1bdd-cae9-4c0c-bf4a-7500702f1ee4" containerName="registry-server" Feb 02 17:41:29 crc kubenswrapper[4858]: E0202 17:41:29.499497 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7249a924-17cd-4ee6-b7df-709372c90d10" containerName="extract-utilities" Feb 02 17:41:29 crc kubenswrapper[4858]: I0202 17:41:29.499502 4858 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7249a924-17cd-4ee6-b7df-709372c90d10" containerName="extract-utilities" Feb 02 17:41:29 crc kubenswrapper[4858]: E0202 17:41:29.499516 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7249a924-17cd-4ee6-b7df-709372c90d10" containerName="extract-content" Feb 02 17:41:29 crc kubenswrapper[4858]: I0202 17:41:29.499523 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7249a924-17cd-4ee6-b7df-709372c90d10" containerName="extract-content" Feb 02 17:41:29 crc kubenswrapper[4858]: E0202 17:41:29.499536 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544f1bdd-cae9-4c0c-bf4a-7500702f1ee4" containerName="extract-utilities" Feb 02 17:41:29 crc kubenswrapper[4858]: I0202 17:41:29.499542 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="544f1bdd-cae9-4c0c-bf4a-7500702f1ee4" containerName="extract-utilities" Feb 02 17:41:29 crc kubenswrapper[4858]: E0202 17:41:29.499558 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7249a924-17cd-4ee6-b7df-709372c90d10" containerName="registry-server" Feb 02 17:41:29 crc kubenswrapper[4858]: I0202 17:41:29.499565 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7249a924-17cd-4ee6-b7df-709372c90d10" containerName="registry-server" Feb 02 17:41:29 crc kubenswrapper[4858]: E0202 17:41:29.499572 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544f1bdd-cae9-4c0c-bf4a-7500702f1ee4" containerName="extract-content" Feb 02 17:41:29 crc kubenswrapper[4858]: I0202 17:41:29.499578 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="544f1bdd-cae9-4c0c-bf4a-7500702f1ee4" containerName="extract-content" Feb 02 17:41:29 crc kubenswrapper[4858]: I0202 17:41:29.499746 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="544f1bdd-cae9-4c0c-bf4a-7500702f1ee4" containerName="registry-server" Feb 02 17:41:29 crc kubenswrapper[4858]: I0202 17:41:29.499770 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7249a924-17cd-4ee6-b7df-709372c90d10" containerName="registry-server" Feb 02 17:41:29 crc kubenswrapper[4858]: I0202 17:41:29.502221 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7wrzq" Feb 02 17:41:29 crc kubenswrapper[4858]: I0202 17:41:29.517660 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wrzq"] Feb 02 17:41:29 crc kubenswrapper[4858]: I0202 17:41:29.627328 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrmhh\" (UniqueName: \"kubernetes.io/projected/0238d131-f28f-40f5-bc15-837989d08933-kube-api-access-wrmhh\") pod \"community-operators-7wrzq\" (UID: \"0238d131-f28f-40f5-bc15-837989d08933\") " pod="openshift-marketplace/community-operators-7wrzq" Feb 02 17:41:29 crc kubenswrapper[4858]: I0202 17:41:29.627405 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0238d131-f28f-40f5-bc15-837989d08933-catalog-content\") pod \"community-operators-7wrzq\" (UID: \"0238d131-f28f-40f5-bc15-837989d08933\") " pod="openshift-marketplace/community-operators-7wrzq" Feb 02 17:41:29 crc kubenswrapper[4858]: I0202 17:41:29.627436 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0238d131-f28f-40f5-bc15-837989d08933-utilities\") pod \"community-operators-7wrzq\" (UID: \"0238d131-f28f-40f5-bc15-837989d08933\") " pod="openshift-marketplace/community-operators-7wrzq" Feb 02 17:41:29 crc kubenswrapper[4858]: I0202 17:41:29.729462 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrmhh\" (UniqueName: \"kubernetes.io/projected/0238d131-f28f-40f5-bc15-837989d08933-kube-api-access-wrmhh\") pod \"community-operators-7wrzq\" (UID: \"0238d131-f28f-40f5-bc15-837989d08933\") " pod="openshift-marketplace/community-operators-7wrzq" Feb 02 17:41:29 crc kubenswrapper[4858]: I0202 17:41:29.729833 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0238d131-f28f-40f5-bc15-837989d08933-catalog-content\") pod \"community-operators-7wrzq\" (UID: \"0238d131-f28f-40f5-bc15-837989d08933\") " pod="openshift-marketplace/community-operators-7wrzq" Feb 02 17:41:29 crc kubenswrapper[4858]: I0202 17:41:29.729854 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0238d131-f28f-40f5-bc15-837989d08933-utilities\") pod \"community-operators-7wrzq\" (UID: \"0238d131-f28f-40f5-bc15-837989d08933\") " pod="openshift-marketplace/community-operators-7wrzq" Feb 02 17:41:29 crc kubenswrapper[4858]: I0202 17:41:29.730418 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0238d131-f28f-40f5-bc15-837989d08933-utilities\") pod \"community-operators-7wrzq\" (UID: \"0238d131-f28f-40f5-bc15-837989d08933\") " pod="openshift-marketplace/community-operators-7wrzq" Feb 02 17:41:29 crc kubenswrapper[4858]: I0202 17:41:29.730887 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0238d131-f28f-40f5-bc15-837989d08933-catalog-content\") pod \"community-operators-7wrzq\" (UID: \"0238d131-f28f-40f5-bc15-837989d08933\") " pod="openshift-marketplace/community-operators-7wrzq" Feb 02 17:41:29 crc kubenswrapper[4858]: I0202 17:41:29.749526 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wrmhh\" (UniqueName: \"kubernetes.io/projected/0238d131-f28f-40f5-bc15-837989d08933-kube-api-access-wrmhh\") pod \"community-operators-7wrzq\" (UID: \"0238d131-f28f-40f5-bc15-837989d08933\") " pod="openshift-marketplace/community-operators-7wrzq" Feb 02 17:41:29 crc kubenswrapper[4858]: I0202 17:41:29.825491 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wrzq" Feb 02 17:41:30 crc kubenswrapper[4858]: W0202 17:41:30.445570 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0238d131_f28f_40f5_bc15_837989d08933.slice/crio-71c9185825ffdb1914a3fb8a2964fa9ff5b8fb2014358647c26f796ceb347cb9 WatchSource:0}: Error finding container 71c9185825ffdb1914a3fb8a2964fa9ff5b8fb2014358647c26f796ceb347cb9: Status 404 returned error can't find the container with id 71c9185825ffdb1914a3fb8a2964fa9ff5b8fb2014358647c26f796ceb347cb9 Feb 02 17:41:30 crc kubenswrapper[4858]: I0202 17:41:30.457906 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wrzq"] Feb 02 17:41:31 crc kubenswrapper[4858]: I0202 17:41:31.055714 4858 generic.go:334] "Generic (PLEG): container finished" podID="0238d131-f28f-40f5-bc15-837989d08933" containerID="be0b08f3e198e1ff21cb27f8c9a925c938914bf5f4738aaa33e034585d7e471f" exitCode=0 Feb 02 17:41:31 crc kubenswrapper[4858]: I0202 17:41:31.055763 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wrzq" event={"ID":"0238d131-f28f-40f5-bc15-837989d08933","Type":"ContainerDied","Data":"be0b08f3e198e1ff21cb27f8c9a925c938914bf5f4738aaa33e034585d7e471f"} Feb 02 17:41:31 crc kubenswrapper[4858]: I0202 17:41:31.055795 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wrzq" event={"ID":"0238d131-f28f-40f5-bc15-837989d08933","Type":"ContainerStarted","Data":"71c9185825ffdb1914a3fb8a2964fa9ff5b8fb2014358647c26f796ceb347cb9"} Feb 02 17:41:33 crc kubenswrapper[4858]: I0202 17:41:33.077253 4858 generic.go:334] "Generic (PLEG): container finished" podID="0238d131-f28f-40f5-bc15-837989d08933" containerID="8e739961cf9cea1911f64a6a76633042329edfd2ac753d46c2c5651e726aaf58" exitCode=0 Feb 02 17:41:33 crc kubenswrapper[4858]: I0202 17:41:33.077339 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wrzq" event={"ID":"0238d131-f28f-40f5-bc15-837989d08933","Type":"ContainerDied","Data":"8e739961cf9cea1911f64a6a76633042329edfd2ac753d46c2c5651e726aaf58"} Feb 02 17:41:36 crc kubenswrapper[4858]: I0202 17:41:36.104250 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wrzq" event={"ID":"0238d131-f28f-40f5-bc15-837989d08933","Type":"ContainerStarted","Data":"b1d9184069881d881b47b4b924b62a8a30738b720bcd5618a2a776e204a637d4"} Feb 02 17:41:36 crc kubenswrapper[4858]: I0202 17:41:36.128907 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7wrzq" podStartSLOduration=3.150920985 podStartE2EDuration="7.128884512s" podCreationTimestamp="2026-02-02 17:41:29 +0000 UTC" firstStartedPulling="2026-02-02 17:41:31.058256656 +0000 UTC m=+1592.210671961" lastFinishedPulling="2026-02-02 17:41:35.036220213 +0000 UTC m=+1596.188635488" observedRunningTime="2026-02-02 17:41:36.119545596 +0000 UTC 
m=+1597.271960871" watchObservedRunningTime="2026-02-02 17:41:36.128884512 +0000 UTC m=+1597.281299787" Feb 02 17:41:39 crc kubenswrapper[4858]: I0202 17:41:39.827043 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7wrzq" Feb 02 17:41:39 crc kubenswrapper[4858]: I0202 17:41:39.828246 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7wrzq" Feb 02 17:41:39 crc kubenswrapper[4858]: I0202 17:41:39.876751 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7wrzq" Feb 02 17:41:40 crc kubenswrapper[4858]: I0202 17:41:40.041197 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-sgmhl"] Feb 02 17:41:40 crc kubenswrapper[4858]: I0202 17:41:40.050747 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-sgmhl"] Feb 02 17:41:40 crc kubenswrapper[4858]: I0202 17:41:40.186211 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7wrzq" Feb 02 17:41:40 crc kubenswrapper[4858]: I0202 17:41:40.240238 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wrzq"] Feb 02 17:41:40 crc kubenswrapper[4858]: I0202 17:41:40.411935 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5f00171-8005-4f58-a90a-5f0be6c6a48f" path="/var/lib/kubelet/pods/d5f00171-8005-4f58-a90a-5f0be6c6a48f/volumes" Feb 02 17:41:42 crc kubenswrapper[4858]: I0202 17:41:42.162045 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7wrzq" podUID="0238d131-f28f-40f5-bc15-837989d08933" containerName="registry-server" containerID="cri-o://b1d9184069881d881b47b4b924b62a8a30738b720bcd5618a2a776e204a637d4" gracePeriod=2 Feb 02 17:41:42 crc kubenswrapper[4858]: I0202 17:41:42.638145 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7wrzq" Feb 02 17:41:42 crc kubenswrapper[4858]: I0202 17:41:42.807321 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0238d131-f28f-40f5-bc15-837989d08933-utilities\") pod \"0238d131-f28f-40f5-bc15-837989d08933\" (UID: \"0238d131-f28f-40f5-bc15-837989d08933\") " Feb 02 17:41:42 crc kubenswrapper[4858]: I0202 17:41:42.807601 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrmhh\" (UniqueName: \"kubernetes.io/projected/0238d131-f28f-40f5-bc15-837989d08933-kube-api-access-wrmhh\") pod \"0238d131-f28f-40f5-bc15-837989d08933\" (UID: \"0238d131-f28f-40f5-bc15-837989d08933\") " Feb 02 17:41:42 crc kubenswrapper[4858]: I0202 17:41:42.807701 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0238d131-f28f-40f5-bc15-837989d08933-catalog-content\") pod \"0238d131-f28f-40f5-bc15-837989d08933\" (UID: \"0238d131-f28f-40f5-bc15-837989d08933\") " Feb 02 17:41:42 crc kubenswrapper[4858]: I0202 17:41:42.811301 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0238d131-f28f-40f5-bc15-837989d08933-utilities" (OuterVolumeSpecName: "utilities") pod "0238d131-f28f-40f5-bc15-837989d08933" (UID: "0238d131-f28f-40f5-bc15-837989d08933"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:41:42 crc kubenswrapper[4858]: I0202 17:41:42.833328 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0238d131-f28f-40f5-bc15-837989d08933-kube-api-access-wrmhh" (OuterVolumeSpecName: "kube-api-access-wrmhh") pod "0238d131-f28f-40f5-bc15-837989d08933" (UID: "0238d131-f28f-40f5-bc15-837989d08933"). InnerVolumeSpecName "kube-api-access-wrmhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:41:42 crc kubenswrapper[4858]: I0202 17:41:42.909594 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrmhh\" (UniqueName: \"kubernetes.io/projected/0238d131-f28f-40f5-bc15-837989d08933-kube-api-access-wrmhh\") on node \"crc\" DevicePath \"\"" Feb 02 17:41:42 crc kubenswrapper[4858]: I0202 17:41:42.909635 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0238d131-f28f-40f5-bc15-837989d08933-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 17:41:43 crc kubenswrapper[4858]: I0202 17:41:43.047542 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0238d131-f28f-40f5-bc15-837989d08933-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0238d131-f28f-40f5-bc15-837989d08933" (UID: "0238d131-f28f-40f5-bc15-837989d08933"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:41:43 crc kubenswrapper[4858]: I0202 17:41:43.112890 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0238d131-f28f-40f5-bc15-837989d08933-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 17:41:43 crc kubenswrapper[4858]: I0202 17:41:43.175705 4858 generic.go:334] "Generic (PLEG): container finished" podID="0238d131-f28f-40f5-bc15-837989d08933" containerID="b1d9184069881d881b47b4b924b62a8a30738b720bcd5618a2a776e204a637d4" exitCode=0 Feb 02 17:41:43 crc kubenswrapper[4858]: I0202 17:41:43.175754 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wrzq" Feb 02 17:41:43 crc kubenswrapper[4858]: I0202 17:41:43.175761 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wrzq" event={"ID":"0238d131-f28f-40f5-bc15-837989d08933","Type":"ContainerDied","Data":"b1d9184069881d881b47b4b924b62a8a30738b720bcd5618a2a776e204a637d4"} Feb 02 17:41:43 crc kubenswrapper[4858]: I0202 17:41:43.175792 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wrzq" event={"ID":"0238d131-f28f-40f5-bc15-837989d08933","Type":"ContainerDied","Data":"71c9185825ffdb1914a3fb8a2964fa9ff5b8fb2014358647c26f796ceb347cb9"} Feb 02 17:41:43 crc kubenswrapper[4858]: I0202 17:41:43.175814 4858 scope.go:117] "RemoveContainer" containerID="b1d9184069881d881b47b4b924b62a8a30738b720bcd5618a2a776e204a637d4" Feb 02 17:41:43 crc kubenswrapper[4858]: I0202 17:41:43.196965 4858 scope.go:117] "RemoveContainer" containerID="8e739961cf9cea1911f64a6a76633042329edfd2ac753d46c2c5651e726aaf58" Feb 02 17:41:43 crc kubenswrapper[4858]: I0202 17:41:43.213665 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wrzq"] Feb 02 17:41:43 crc kubenswrapper[4858]: I0202 17:41:43.222013 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7wrzq"] Feb 02 17:41:43 crc kubenswrapper[4858]: I0202 17:41:43.232036 4858 scope.go:117] "RemoveContainer" containerID="be0b08f3e198e1ff21cb27f8c9a925c938914bf5f4738aaa33e034585d7e471f" Feb 02 17:41:43 crc kubenswrapper[4858]: I0202 17:41:43.266473 4858 scope.go:117] "RemoveContainer" containerID="b1d9184069881d881b47b4b924b62a8a30738b720bcd5618a2a776e204a637d4" Feb 02 17:41:43 crc kubenswrapper[4858]: E0202 17:41:43.266969 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1d9184069881d881b47b4b924b62a8a30738b720bcd5618a2a776e204a637d4\": container with ID starting with b1d9184069881d881b47b4b924b62a8a30738b720bcd5618a2a776e204a637d4 not found: ID does not exist" containerID="b1d9184069881d881b47b4b924b62a8a30738b720bcd5618a2a776e204a637d4" Feb 02 17:41:43 crc kubenswrapper[4858]: I0202 17:41:43.267028 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1d9184069881d881b47b4b924b62a8a30738b720bcd5618a2a776e204a637d4"} err="failed to get container status \"b1d9184069881d881b47b4b924b62a8a30738b720bcd5618a2a776e204a637d4\": rpc error: code = NotFound desc = could not find container \"b1d9184069881d881b47b4b924b62a8a30738b720bcd5618a2a776e204a637d4\": container with ID starting with b1d9184069881d881b47b4b924b62a8a30738b720bcd5618a2a776e204a637d4 not found: ID does not exist" Feb 02 
17:41:43 crc kubenswrapper[4858]: I0202 17:41:43.267049 4858 scope.go:117] "RemoveContainer" containerID="8e739961cf9cea1911f64a6a76633042329edfd2ac753d46c2c5651e726aaf58" Feb 02 17:41:43 crc kubenswrapper[4858]: E0202 17:41:43.267383 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e739961cf9cea1911f64a6a76633042329edfd2ac753d46c2c5651e726aaf58\": container with ID starting with 8e739961cf9cea1911f64a6a76633042329edfd2ac753d46c2c5651e726aaf58 not found: ID does not exist" containerID="8e739961cf9cea1911f64a6a76633042329edfd2ac753d46c2c5651e726aaf58" Feb 02 17:41:43 crc kubenswrapper[4858]: I0202 17:41:43.267419 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e739961cf9cea1911f64a6a76633042329edfd2ac753d46c2c5651e726aaf58"} err="failed to get container status \"8e739961cf9cea1911f64a6a76633042329edfd2ac753d46c2c5651e726aaf58\": rpc error: code = NotFound desc = could not find container \"8e739961cf9cea1911f64a6a76633042329edfd2ac753d46c2c5651e726aaf58\": container with ID starting with 8e739961cf9cea1911f64a6a76633042329edfd2ac753d46c2c5651e726aaf58 not found: ID does not exist" Feb 02 17:41:43 crc kubenswrapper[4858]: I0202 17:41:43.267442 4858 scope.go:117] "RemoveContainer" containerID="be0b08f3e198e1ff21cb27f8c9a925c938914bf5f4738aaa33e034585d7e471f" Feb 02 17:41:43 crc kubenswrapper[4858]: E0202 17:41:43.268018 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be0b08f3e198e1ff21cb27f8c9a925c938914bf5f4738aaa33e034585d7e471f\": container with ID starting with be0b08f3e198e1ff21cb27f8c9a925c938914bf5f4738aaa33e034585d7e471f not found: ID does not exist" containerID="be0b08f3e198e1ff21cb27f8c9a925c938914bf5f4738aaa33e034585d7e471f" Feb 02 17:41:43 crc kubenswrapper[4858]: I0202 17:41:43.268068 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be0b08f3e198e1ff21cb27f8c9a925c938914bf5f4738aaa33e034585d7e471f"} err="failed to get container status \"be0b08f3e198e1ff21cb27f8c9a925c938914bf5f4738aaa33e034585d7e471f\": rpc error: code = NotFound desc = could not find container \"be0b08f3e198e1ff21cb27f8c9a925c938914bf5f4738aaa33e034585d7e471f\": container with ID starting with be0b08f3e198e1ff21cb27f8c9a925c938914bf5f4738aaa33e034585d7e471f not found: ID does not exist" Feb 02 17:41:44 crc kubenswrapper[4858]: I0202 17:41:44.411613 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0238d131-f28f-40f5-bc15-837989d08933" path="/var/lib/kubelet/pods/0238d131-f28f-40f5-bc15-837989d08933/volumes" Feb 02 17:41:52 crc kubenswrapper[4858]: I0202 17:41:52.042309 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-5d29t"] Feb 02 17:41:52 crc kubenswrapper[4858]: I0202 17:41:52.053097 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-gfbpg"] Feb 02 17:41:52 crc kubenswrapper[4858]: I0202 17:41:52.071165 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-5d29t"] Feb 02 17:41:52 crc kubenswrapper[4858]: I0202 17:41:52.080441 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-gfbpg"] Feb 02 17:41:52 crc kubenswrapper[4858]: I0202 17:41:52.410661 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="368436df-491c-4059-92f9-16993b192d76" 
path="/var/lib/kubelet/pods/368436df-491c-4059-92f9-16993b192d76/volumes" Feb 02 17:41:52 crc kubenswrapper[4858]: I0202 17:41:52.411552 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1355b1c-35d4-42d0-8780-7e01dd0b7a8d" path="/var/lib/kubelet/pods/e1355b1c-35d4-42d0-8780-7e01dd0b7a8d/volumes" Feb 02 17:41:53 crc kubenswrapper[4858]: I0202 17:41:53.031719 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-mdq86"] Feb 02 17:41:53 crc kubenswrapper[4858]: I0202 17:41:53.038987 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-mdq86"] Feb 02 17:41:54 crc kubenswrapper[4858]: I0202 17:41:54.413818 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56da7ca5-acf2-4372-9e48-20b829275727" path="/var/lib/kubelet/pods/56da7ca5-acf2-4372-9e48-20b829275727/volumes" Feb 02 17:41:57 crc kubenswrapper[4858]: I0202 17:41:57.807567 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:41:57 crc kubenswrapper[4858]: I0202 17:41:57.808176 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:42:08 crc kubenswrapper[4858]: I0202 17:42:08.053594 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-6xt8q"] Feb 02 17:42:08 crc kubenswrapper[4858]: I0202 17:42:08.064508 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-6xt8q"] Feb 02 17:42:08 crc kubenswrapper[4858]: I0202 17:42:08.415329 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5a9fadc-338f-44bb-8ebd-bc4fe01972bf" path="/var/lib/kubelet/pods/d5a9fadc-338f-44bb-8ebd-bc4fe01972bf/volumes" Feb 02 17:42:10 crc kubenswrapper[4858]: I0202 17:42:10.056787 4858 scope.go:117] "RemoveContainer" containerID="6b334ecf4231d51ee8d95f3958353865b97e1a74fff41018a34850fd7f7761f7" Feb 02 17:42:10 crc kubenswrapper[4858]: I0202 17:42:10.101519 4858 scope.go:117] "RemoveContainer" containerID="ad3d10a4a57038497de79a193804b1dd2faaed6dba90621eadb67952f0f1019a" Feb 02 17:42:10 crc kubenswrapper[4858]: I0202 17:42:10.152463 4858 scope.go:117] "RemoveContainer" containerID="6ba4e0eedf199bd7f5ace80d1079a59541293083f98002024bc39aed9ad830c8" Feb 02 17:42:10 crc kubenswrapper[4858]: I0202 17:42:10.195786 4858 scope.go:117] "RemoveContainer" containerID="70750dcfaf02200e8f69ffbc0988d57dd4837336eb2815c2b955cc860a72d598" Feb 02 17:42:10 crc kubenswrapper[4858]: I0202 17:42:10.247338 4858 scope.go:117] "RemoveContainer" containerID="0dd7abadbe4ebe5d95892a5fcbf84962471de4be3deb29b052c9ba7afbec7b2c" Feb 02 17:42:10 crc kubenswrapper[4858]: I0202 17:42:10.296879 4858 scope.go:117] "RemoveContainer" containerID="2de5a1bd3a836ccf25ef75da4703f5354db68ef3c1d464e0a233a767be0e9bcf" Feb 02 17:42:23 crc kubenswrapper[4858]: I0202 17:42:23.563848 4858 generic.go:334] "Generic (PLEG): container finished" podID="07f60796-9efa-4245-955f-14c0c16c918d" containerID="24dad98c8692b4b2e215f9751f8ef05ee7451e38ec062be21e31788387171964" exitCode=0 Feb 02 17:42:23 crc 
kubenswrapper[4858]: I0202 17:42:23.563924 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq" event={"ID":"07f60796-9efa-4245-955f-14c0c16c918d","Type":"ContainerDied","Data":"24dad98c8692b4b2e215f9751f8ef05ee7451e38ec062be21e31788387171964"} Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.002475 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.166259 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07f60796-9efa-4245-955f-14c0c16c918d-ssh-key-openstack-edpm-ipam\") pod \"07f60796-9efa-4245-955f-14c0c16c918d\" (UID: \"07f60796-9efa-4245-955f-14c0c16c918d\") " Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.166660 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07f60796-9efa-4245-955f-14c0c16c918d-inventory\") pod \"07f60796-9efa-4245-955f-14c0c16c918d\" (UID: \"07f60796-9efa-4245-955f-14c0c16c918d\") " Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.166733 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmq76\" (UniqueName: \"kubernetes.io/projected/07f60796-9efa-4245-955f-14c0c16c918d-kube-api-access-xmq76\") pod \"07f60796-9efa-4245-955f-14c0c16c918d\" (UID: \"07f60796-9efa-4245-955f-14c0c16c918d\") " Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.172033 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f60796-9efa-4245-955f-14c0c16c918d-kube-api-access-xmq76" (OuterVolumeSpecName: "kube-api-access-xmq76") pod "07f60796-9efa-4245-955f-14c0c16c918d" (UID: "07f60796-9efa-4245-955f-14c0c16c918d"). InnerVolumeSpecName "kube-api-access-xmq76". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.196851 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f60796-9efa-4245-955f-14c0c16c918d-inventory" (OuterVolumeSpecName: "inventory") pod "07f60796-9efa-4245-955f-14c0c16c918d" (UID: "07f60796-9efa-4245-955f-14c0c16c918d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.205087 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f60796-9efa-4245-955f-14c0c16c918d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "07f60796-9efa-4245-955f-14c0c16c918d" (UID: "07f60796-9efa-4245-955f-14c0c16c918d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.269893 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07f60796-9efa-4245-955f-14c0c16c918d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.269922 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07f60796-9efa-4245-955f-14c0c16c918d-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.269931 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmq76\" (UniqueName: \"kubernetes.io/projected/07f60796-9efa-4245-955f-14c0c16c918d-kube-api-access-xmq76\") on node \"crc\" DevicePath \"\"" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.582605 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq" event={"ID":"07f60796-9efa-4245-955f-14c0c16c918d","Type":"ContainerDied","Data":"eb791fd76451e501e0c6c3296ff2a4a3301becb17f21a52cafabe37d736824d8"} Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.582648 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb791fd76451e501e0c6c3296ff2a4a3301becb17f21a52cafabe37d736824d8" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.582724 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.673272 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw"] Feb 02 17:42:25 crc kubenswrapper[4858]: E0202 17:42:25.673721 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f60796-9efa-4245-955f-14c0c16c918d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.673748 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f60796-9efa-4245-955f-14c0c16c918d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 17:42:25 crc kubenswrapper[4858]: E0202 17:42:25.673783 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0238d131-f28f-40f5-bc15-837989d08933" containerName="registry-server" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.673792 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0238d131-f28f-40f5-bc15-837989d08933" containerName="registry-server" Feb 02 17:42:25 crc kubenswrapper[4858]: E0202 17:42:25.673804 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0238d131-f28f-40f5-bc15-837989d08933" containerName="extract-utilities" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.673812 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0238d131-f28f-40f5-bc15-837989d08933" containerName="extract-utilities" Feb 02 17:42:25 crc kubenswrapper[4858]: E0202 17:42:25.673827 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0238d131-f28f-40f5-bc15-837989d08933" containerName="extract-content" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.673835 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0238d131-f28f-40f5-bc15-837989d08933" containerName="extract-content" Feb 02 17:42:25 crc 
kubenswrapper[4858]: I0202 17:42:25.674112 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f60796-9efa-4245-955f-14c0c16c918d" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.674130 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0238d131-f28f-40f5-bc15-837989d08933" containerName="registry-server" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.674914 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.676871 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw\" (UID: \"9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.676945 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg479\" (UniqueName: \"kubernetes.io/projected/9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43-kube-api-access-cg479\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw\" (UID: \"9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.676985 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw\" (UID: \"9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.682526 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw"] Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.683609 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.683732 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.684084 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q7l94" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.684296 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.778795 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg479\" (UniqueName: \"kubernetes.io/projected/9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43-kube-api-access-cg479\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw\" (UID: \"9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.778871 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw\" (UID: \"9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.779045 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw\" (UID: \"9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.783712 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw\" (UID: \"9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.784466 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw\" (UID: \"9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.799729 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg479\" (UniqueName: \"kubernetes.io/projected/9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43-kube-api-access-cg479\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw\" (UID: \"9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw" Feb 02 17:42:25 crc kubenswrapper[4858]: I0202 17:42:25.999279 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw" Feb 02 17:42:26 crc kubenswrapper[4858]: I0202 17:42:26.554698 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw"] Feb 02 17:42:26 crc kubenswrapper[4858]: I0202 17:42:26.595048 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw" event={"ID":"9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43","Type":"ContainerStarted","Data":"c95a66922bde6a923d4cba4b816f605d6af6d8b43e2bf16e250fd6b2c05bfc1a"} Feb 02 17:42:27 crc kubenswrapper[4858]: I0202 17:42:27.604641 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw" event={"ID":"9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43","Type":"ContainerStarted","Data":"f313338f148c92d8a92aa15b1c5318f852e4b0d276a0ddee68082f95730ea685"} Feb 02 17:42:27 crc kubenswrapper[4858]: I0202 17:42:27.617756 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw" podStartSLOduration=1.847263518 podStartE2EDuration="2.617736519s" podCreationTimestamp="2026-02-02 17:42:25 +0000 UTC" firstStartedPulling="2026-02-02 17:42:26.559850346 +0000 UTC m=+1647.712265621" lastFinishedPulling="2026-02-02 17:42:27.330323347 +0000 UTC m=+1648.482738622" observedRunningTime="2026-02-02 17:42:27.616686889 +0000 UTC m=+1648.769102174" watchObservedRunningTime="2026-02-02 17:42:27.617736519 +0000 UTC m=+1648.770151794" Feb 02 17:42:27 crc kubenswrapper[4858]: I0202 17:42:27.808061 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:42:27 crc kubenswrapper[4858]: I0202 17:42:27.808110 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:42:32 crc kubenswrapper[4858]: I0202 17:42:32.654190 4858 generic.go:334] "Generic (PLEG): container finished" podID="9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43" containerID="f313338f148c92d8a92aa15b1c5318f852e4b0d276a0ddee68082f95730ea685" exitCode=0 Feb 02 17:42:32 crc kubenswrapper[4858]: I0202 17:42:32.654252 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw" event={"ID":"9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43","Type":"ContainerDied","Data":"f313338f148c92d8a92aa15b1c5318f852e4b0d276a0ddee68082f95730ea685"} Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.102531 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.233546 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg479\" (UniqueName: \"kubernetes.io/projected/9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43-kube-api-access-cg479\") pod \"9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43\" (UID: \"9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43\") " Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.233596 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43-inventory\") pod \"9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43\" (UID: \"9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43\") " Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.233655 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43-ssh-key-openstack-edpm-ipam\") pod \"9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43\" (UID: \"9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43\") " Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.246228 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43-kube-api-access-cg479" (OuterVolumeSpecName: "kube-api-access-cg479") pod "9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43" (UID: "9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43"). InnerVolumeSpecName "kube-api-access-cg479". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.282230 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43" (UID: "9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.285173 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43-inventory" (OuterVolumeSpecName: "inventory") pod "9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43" (UID: "9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.336659 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg479\" (UniqueName: \"kubernetes.io/projected/9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43-kube-api-access-cg479\") on node \"crc\" DevicePath \"\"" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.336701 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.336714 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.676251 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw" event={"ID":"9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43","Type":"ContainerDied","Data":"c95a66922bde6a923d4cba4b816f605d6af6d8b43e2bf16e250fd6b2c05bfc1a"} Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.676322 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c95a66922bde6a923d4cba4b816f605d6af6d8b43e2bf16e250fd6b2c05bfc1a" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.676367 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.749775 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-65b4t"] Feb 02 17:42:34 crc kubenswrapper[4858]: E0202 17:42:34.750243 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.750263 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.750461 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.751223 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-65b4t" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.753642 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q7l94" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.753701 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.753646 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.753966 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.760268 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-65b4t"] Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.847709 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbcce266-9b8e-489e-935d-17695dd8cf62-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-65b4t\" (UID: \"dbcce266-9b8e-489e-935d-17695dd8cf62\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-65b4t" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.847785 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbcce266-9b8e-489e-935d-17695dd8cf62-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-65b4t\" (UID: \"dbcce266-9b8e-489e-935d-17695dd8cf62\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-65b4t" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.847828 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb226\" (UniqueName: \"kubernetes.io/projected/dbcce266-9b8e-489e-935d-17695dd8cf62-kube-api-access-xb226\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-65b4t\" (UID: \"dbcce266-9b8e-489e-935d-17695dd8cf62\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-65b4t" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.949768 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbcce266-9b8e-489e-935d-17695dd8cf62-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-65b4t\" (UID: \"dbcce266-9b8e-489e-935d-17695dd8cf62\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-65b4t" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.949827 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbcce266-9b8e-489e-935d-17695dd8cf62-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-65b4t\" (UID: \"dbcce266-9b8e-489e-935d-17695dd8cf62\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-65b4t" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.949868 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb226\" (UniqueName: \"kubernetes.io/projected/dbcce266-9b8e-489e-935d-17695dd8cf62-kube-api-access-xb226\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-65b4t\" (UID: \"dbcce266-9b8e-489e-935d-17695dd8cf62\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-65b4t" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.954390 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbcce266-9b8e-489e-935d-17695dd8cf62-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-65b4t\" (UID: \"dbcce266-9b8e-489e-935d-17695dd8cf62\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-65b4t" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.955326 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbcce266-9b8e-489e-935d-17695dd8cf62-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-65b4t\" (UID: \"dbcce266-9b8e-489e-935d-17695dd8cf62\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-65b4t" Feb 02 17:42:34 crc kubenswrapper[4858]: I0202 17:42:34.967630 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb226\" (UniqueName: \"kubernetes.io/projected/dbcce266-9b8e-489e-935d-17695dd8cf62-kube-api-access-xb226\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-65b4t\" (UID: \"dbcce266-9b8e-489e-935d-17695dd8cf62\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-65b4t" Feb 02 17:42:35 crc kubenswrapper[4858]: I0202 17:42:35.069249 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-65b4t" Feb 02 17:42:35 crc kubenswrapper[4858]: I0202 17:42:35.593281 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-65b4t"] Feb 02 17:42:35 crc kubenswrapper[4858]: I0202 17:42:35.686453 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-65b4t" event={"ID":"dbcce266-9b8e-489e-935d-17695dd8cf62","Type":"ContainerStarted","Data":"1acdacb121bfdef81c7f401e91ea0bc2a182e62f75481a141c0260a8b3fdfde1"} Feb 02 17:42:36 crc kubenswrapper[4858]: I0202 17:42:36.696480 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-65b4t" event={"ID":"dbcce266-9b8e-489e-935d-17695dd8cf62","Type":"ContainerStarted","Data":"130bc93f6dc5982c724d0ddce200d9151df82b9c27bff4b07bfd9e4d5361b730"} Feb 02 17:42:36 crc kubenswrapper[4858]: I0202 17:42:36.719012 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-65b4t" podStartSLOduration=2.197085229 podStartE2EDuration="2.718945389s" podCreationTimestamp="2026-02-02 17:42:34 +0000 UTC" firstStartedPulling="2026-02-02 17:42:35.60167984 +0000 UTC m=+1656.754095115" lastFinishedPulling="2026-02-02 17:42:36.12354001 +0000 UTC m=+1657.275955275" observedRunningTime="2026-02-02 17:42:36.716339425 +0000 UTC m=+1657.868754780" watchObservedRunningTime="2026-02-02 17:42:36.718945389 +0000 UTC m=+1657.871360684" Feb 02 17:42:44 crc kubenswrapper[4858]: I0202 17:42:44.051127 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-cqkx4"] Feb 02 17:42:44 crc kubenswrapper[4858]: I0202 17:42:44.062849 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-4698-account-create-update-lnpzg"] Feb 
02 17:42:44 crc kubenswrapper[4858]: I0202 17:42:44.076770 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-cpn8d"]
Feb 02 17:42:44 crc kubenswrapper[4858]: I0202 17:42:44.083850 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-45eb-account-create-update-hxbqj"]
Feb 02 17:42:44 crc kubenswrapper[4858]: I0202 17:42:44.091651 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ce0f-account-create-update-cg8px"]
Feb 02 17:42:44 crc kubenswrapper[4858]: I0202 17:42:44.098436 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-4698-account-create-update-lnpzg"]
Feb 02 17:42:44 crc kubenswrapper[4858]: I0202 17:42:44.105032 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-cpn8d"]
Feb 02 17:42:44 crc kubenswrapper[4858]: I0202 17:42:44.112360 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ce0f-account-create-update-cg8px"]
Feb 02 17:42:44 crc kubenswrapper[4858]: I0202 17:42:44.119252 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-45eb-account-create-update-hxbqj"]
Feb 02 17:42:44 crc kubenswrapper[4858]: I0202 17:42:44.126506 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-67p9g"]
Feb 02 17:42:44 crc kubenswrapper[4858]: I0202 17:42:44.145950 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-cqkx4"]
Feb 02 17:42:44 crc kubenswrapper[4858]: I0202 17:42:44.158870 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-67p9g"]
Feb 02 17:42:44 crc kubenswrapper[4858]: I0202 17:42:44.410221 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51908721-b3a6-4ecb-b0bc-041a43ecba5e" path="/var/lib/kubelet/pods/51908721-b3a6-4ecb-b0bc-041a43ecba5e/volumes"
Feb 02 17:42:44 crc kubenswrapper[4858]: I0202 17:42:44.411070 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e804b92-5b91-414c-ab96-2c679b264a85" path="/var/lib/kubelet/pods/9e804b92-5b91-414c-ab96-2c679b264a85/volumes"
Feb 02 17:42:44 crc kubenswrapper[4858]: I0202 17:42:44.411703 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3e75f5b-d129-4f48-b69c-35f0fd329c2b" path="/var/lib/kubelet/pods/b3e75f5b-d129-4f48-b69c-35f0fd329c2b/volumes"
Feb 02 17:42:44 crc kubenswrapper[4858]: I0202 17:42:44.412295 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0fd6b61-532c-4002-bc57-c692aa8255f2" path="/var/lib/kubelet/pods/d0fd6b61-532c-4002-bc57-c692aa8255f2/volumes"
Feb 02 17:42:44 crc kubenswrapper[4858]: I0202 17:42:44.413358 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e58dfef0-aeb5-4f3d-bf54-f4c51cf88901" path="/var/lib/kubelet/pods/e58dfef0-aeb5-4f3d-bf54-f4c51cf88901/volumes"
Feb 02 17:42:44 crc kubenswrapper[4858]: I0202 17:42:44.414044 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6858894-d212-4bb0-a6dc-5e7633b29b58" path="/var/lib/kubelet/pods/e6858894-d212-4bb0-a6dc-5e7633b29b58/volumes"
Feb 02 17:42:57 crc kubenswrapper[4858]: I0202 17:42:57.807511 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 17:42:57 crc kubenswrapper[4858]: I0202 17:42:57.809694 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 17:42:57 crc kubenswrapper[4858]: I0202 17:42:57.809908 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2"
Feb 02 17:42:57 crc kubenswrapper[4858]: I0202 17:42:57.810939 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc"} pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 17:42:57 crc kubenswrapper[4858]: I0202 17:42:57.811160 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" containerID="cri-o://74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" gracePeriod=600
Feb 02 17:42:57 crc kubenswrapper[4858]: E0202 17:42:57.959206 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e"
Feb 02 17:42:58 crc kubenswrapper[4858]: I0202 17:42:58.917273 4858 generic.go:334] "Generic (PLEG): container finished" podID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" exitCode=0
Feb 02 17:42:58 crc kubenswrapper[4858]: I0202 17:42:58.917333 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerDied","Data":"74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc"}
Feb 02 17:42:58 crc kubenswrapper[4858]: I0202 17:42:58.917378 4858 scope.go:117] "RemoveContainer" containerID="a72185f737bdc7412fd3cbd9bb1c4c3b6a039e9c69f0ba35d250e546f8d8b6ce"
Feb 02 17:42:58 crc kubenswrapper[4858]: I0202 17:42:58.918681 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc"
Feb 02 17:42:58 crc kubenswrapper[4858]: E0202 17:42:58.919325 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e"
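The run above is one complete liveness-failure cycle for machine-config-daemon-lbvl2: patch_prober records the refused connection to 127.0.0.1:8798, the sync loop marks the container unhealthy, kuberuntime_container kills it with gracePeriod=600, and pod_workers then holds the restart inside the 5m0s CrashLoopBackOff window; the exitCode=0 on the following PLEG event shows the kill itself was clean. A minimal offline sketch for correlating these lines from a saved journal excerpt (the regexes and the summarize() helper are illustrative assumptions, not kubelet tooling):

    import re
    from collections import defaultdict

    # Count liveness probe failures and CrashLoopBackOff skips per pod.
    PROBE = re.compile(r'"Probe failed" probeType="Liveness" pod="([^"]+)"')
    BACKOFF = re.compile(r'with CrashLoopBackOff: .* pod="([^"]+)"')

    def summarize(lines):
        stats = defaultdict(lambda: {"probe_failures": 0, "backoff_skips": 0})
        for line in lines:
            if (m := PROBE.search(line)):
                stats[m.group(1)]["probe_failures"] += 1
            elif (m := BACKOFF.search(line)):
                stats[m.group(1)]["backoff_skips"] += 1
        return dict(stats)

Run against this excerpt it would report one probe failure and a long tail of backoff skips for openshift-machine-config-operator/machine-config-daemon-lbvl2.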
Feb 02 17:43:09 crc kubenswrapper[4858]: I0202 17:43:09.059529 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-66l2r"]
Feb 02 17:43:09 crc kubenswrapper[4858]: I0202 17:43:09.069502 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-66l2r"]
Feb 02 17:43:10 crc kubenswrapper[4858]: I0202 17:43:10.021115 4858 generic.go:334] "Generic (PLEG): container finished" podID="dbcce266-9b8e-489e-935d-17695dd8cf62" containerID="130bc93f6dc5982c724d0ddce200d9151df82b9c27bff4b07bfd9e4d5361b730" exitCode=0
Feb 02 17:43:10 crc kubenswrapper[4858]: I0202 17:43:10.021165 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-65b4t" event={"ID":"dbcce266-9b8e-489e-935d-17695dd8cf62","Type":"ContainerDied","Data":"130bc93f6dc5982c724d0ddce200d9151df82b9c27bff4b07bfd9e4d5361b730"}
Feb 02 17:43:10 crc kubenswrapper[4858]: I0202 17:43:10.416451 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3a30ec7-1686-4aa1-b365-1d0516dda2eb" path="/var/lib/kubelet/pods/f3a30ec7-1686-4aa1-b365-1d0516dda2eb/volumes"
Feb 02 17:43:10 crc kubenswrapper[4858]: I0202 17:43:10.456024 4858 scope.go:117] "RemoveContainer" containerID="4aeb299b4aee2efe0c1e1e3176138562df162c93b53a75144f200e250528597e"
Feb 02 17:43:10 crc kubenswrapper[4858]: I0202 17:43:10.499254 4858 scope.go:117] "RemoveContainer" containerID="c45882112a1870e7e71e994b87b8c98eb8d51d39e7898a8eea14ae128a4432e6"
Feb 02 17:43:10 crc kubenswrapper[4858]: I0202 17:43:10.580349 4858 scope.go:117] "RemoveContainer" containerID="de54cf581f700a3cca648b830e80883b34f157cc68f5258f788fc169cf3b75b2"
Feb 02 17:43:10 crc kubenswrapper[4858]: I0202 17:43:10.612049 4858 scope.go:117] "RemoveContainer" containerID="c01d8736e6ad4d4fa323c695f4a715a9e645fd67373e42610fb1e528996c6c52"
Feb 02 17:43:10 crc kubenswrapper[4858]: I0202 17:43:10.652379 4858 scope.go:117] "RemoveContainer" containerID="2878974141f988a25e49a2f0c4142ca6a5bd797937934fedb077c198f93dbc6a"
Feb 02 17:43:10 crc kubenswrapper[4858]: I0202 17:43:10.702001 4858 scope.go:117] "RemoveContainer" containerID="75fa19b9d59b8c498381b434052f49ac0ecf7474e337b035a736a24ac53adfcb"
Feb 02 17:43:10 crc kubenswrapper[4858]: I0202 17:43:10.742055 4858 scope.go:117] "RemoveContainer" containerID="240802e54634b3f98bd4efe13b3e6f7f586deaf5aaadcb30f2f96a94d7ffd69d"
Feb 02 17:43:11 crc kubenswrapper[4858]: I0202 17:43:11.398669 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-65b4t"
Feb 02 17:43:11 crc kubenswrapper[4858]: I0202 17:43:11.502824 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbcce266-9b8e-489e-935d-17695dd8cf62-ssh-key-openstack-edpm-ipam\") pod \"dbcce266-9b8e-489e-935d-17695dd8cf62\" (UID: \"dbcce266-9b8e-489e-935d-17695dd8cf62\") "
Feb 02 17:43:11 crc kubenswrapper[4858]: I0202 17:43:11.502879 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbcce266-9b8e-489e-935d-17695dd8cf62-inventory\") pod \"dbcce266-9b8e-489e-935d-17695dd8cf62\" (UID: \"dbcce266-9b8e-489e-935d-17695dd8cf62\") "
Feb 02 17:43:11 crc kubenswrapper[4858]: I0202 17:43:11.503085 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb226\" (UniqueName: \"kubernetes.io/projected/dbcce266-9b8e-489e-935d-17695dd8cf62-kube-api-access-xb226\") pod \"dbcce266-9b8e-489e-935d-17695dd8cf62\" (UID: \"dbcce266-9b8e-489e-935d-17695dd8cf62\") "
Feb 02 17:43:11 crc kubenswrapper[4858]: I0202 17:43:11.507900 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbcce266-9b8e-489e-935d-17695dd8cf62-kube-api-access-xb226" (OuterVolumeSpecName: "kube-api-access-xb226") pod "dbcce266-9b8e-489e-935d-17695dd8cf62" (UID: "dbcce266-9b8e-489e-935d-17695dd8cf62"). InnerVolumeSpecName "kube-api-access-xb226". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:43:11 crc kubenswrapper[4858]: I0202 17:43:11.528996 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcce266-9b8e-489e-935d-17695dd8cf62-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dbcce266-9b8e-489e-935d-17695dd8cf62" (UID: "dbcce266-9b8e-489e-935d-17695dd8cf62"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:43:11 crc kubenswrapper[4858]: I0202 17:43:11.538595 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcce266-9b8e-489e-935d-17695dd8cf62-inventory" (OuterVolumeSpecName: "inventory") pod "dbcce266-9b8e-489e-935d-17695dd8cf62" (UID: "dbcce266-9b8e-489e-935d-17695dd8cf62"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 17:43:11 crc kubenswrapper[4858]: I0202 17:43:11.607672 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbcce266-9b8e-489e-935d-17695dd8cf62-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 02 17:43:11 crc kubenswrapper[4858]: I0202 17:43:11.607705 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbcce266-9b8e-489e-935d-17695dd8cf62-inventory\") on node \"crc\" DevicePath \"\""
Feb 02 17:43:11 crc kubenswrapper[4858]: I0202 17:43:11.607716 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb226\" (UniqueName: \"kubernetes.io/projected/dbcce266-9b8e-489e-935d-17695dd8cf62-kube-api-access-xb226\") on node \"crc\" DevicePath \"\""
Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.044808 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-65b4t" event={"ID":"dbcce266-9b8e-489e-935d-17695dd8cf62","Type":"ContainerDied","Data":"1acdacb121bfdef81c7f401e91ea0bc2a182e62f75481a141c0260a8b3fdfde1"}
Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.044879 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1acdacb121bfdef81c7f401e91ea0bc2a182e62f75481a141c0260a8b3fdfde1"
Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.045015 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-65b4t"
Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.131453 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw"]
Feb 02 17:43:12 crc kubenswrapper[4858]: E0202 17:43:12.131950 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbcce266-9b8e-489e-935d-17695dd8cf62" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.131991 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbcce266-9b8e-489e-935d-17695dd8cf62" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.132228 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbcce266-9b8e-489e-935d-17695dd8cf62" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw" Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.135371 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.135600 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.135841 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.139814 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw"] Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.143693 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q7l94" Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.225232 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18853ae6-771f-43f8-a6e9-5501f381891d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw\" (UID: \"18853ae6-771f-43f8-a6e9-5501f381891d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw" Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.225380 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhwh6\" (UniqueName: \"kubernetes.io/projected/18853ae6-771f-43f8-a6e9-5501f381891d-kube-api-access-rhwh6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw\" (UID: \"18853ae6-771f-43f8-a6e9-5501f381891d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw" Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.225429 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18853ae6-771f-43f8-a6e9-5501f381891d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw\" (UID: \"18853ae6-771f-43f8-a6e9-5501f381891d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw" Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.326574 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18853ae6-771f-43f8-a6e9-5501f381891d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw\" (UID: \"18853ae6-771f-43f8-a6e9-5501f381891d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw" Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.326687 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhwh6\" (UniqueName: \"kubernetes.io/projected/18853ae6-771f-43f8-a6e9-5501f381891d-kube-api-access-rhwh6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw\" (UID: \"18853ae6-771f-43f8-a6e9-5501f381891d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw" Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.326729 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/18853ae6-771f-43f8-a6e9-5501f381891d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw\" (UID: \"18853ae6-771f-43f8-a6e9-5501f381891d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw" Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.333549 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18853ae6-771f-43f8-a6e9-5501f381891d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw\" (UID: \"18853ae6-771f-43f8-a6e9-5501f381891d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw" Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.337821 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18853ae6-771f-43f8-a6e9-5501f381891d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw\" (UID: \"18853ae6-771f-43f8-a6e9-5501f381891d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw" Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.342705 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhwh6\" (UniqueName: \"kubernetes.io/projected/18853ae6-771f-43f8-a6e9-5501f381891d-kube-api-access-rhwh6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw\" (UID: \"18853ae6-771f-43f8-a6e9-5501f381891d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw" Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.400546 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:43:12 crc kubenswrapper[4858]: E0202 17:43:12.401299 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.458883 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw" Feb 02 17:43:12 crc kubenswrapper[4858]: I0202 17:43:12.962466 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw"] Feb 02 17:43:13 crc kubenswrapper[4858]: I0202 17:43:13.056090 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw" event={"ID":"18853ae6-771f-43f8-a6e9-5501f381891d","Type":"ContainerStarted","Data":"496fc82d8aee4c6bbb2694749b76e02dbef9322b98509e6e33a4433b23234535"} Feb 02 17:43:14 crc kubenswrapper[4858]: I0202 17:43:14.068413 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw" event={"ID":"18853ae6-771f-43f8-a6e9-5501f381891d","Type":"ContainerStarted","Data":"ae8f9b82143112510aa305304457bb3a977774a2cab12ef006417812f227444f"} Feb 02 17:43:14 crc kubenswrapper[4858]: I0202 17:43:14.095027 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw" podStartSLOduration=1.518361522 podStartE2EDuration="2.095003729s" podCreationTimestamp="2026-02-02 17:43:12 +0000 UTC" firstStartedPulling="2026-02-02 17:43:12.966573912 +0000 UTC m=+1694.118989187" lastFinishedPulling="2026-02-02 17:43:13.543216119 +0000 UTC m=+1694.695631394" observedRunningTime="2026-02-02 17:43:14.089650837 +0000 UTC m=+1695.242066132" watchObservedRunningTime="2026-02-02 17:43:14.095003729 +0000 UTC m=+1695.247419014" Feb 02 17:43:25 crc kubenswrapper[4858]: I0202 17:43:25.401279 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:43:25 crc kubenswrapper[4858]: E0202 17:43:25.401926 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:43:32 crc kubenswrapper[4858]: I0202 17:43:32.031520 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-8ckkf"] Feb 02 17:43:32 crc kubenswrapper[4858]: I0202 17:43:32.045742 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-8ckkf"] Feb 02 17:43:32 crc kubenswrapper[4858]: I0202 17:43:32.416307 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="449436cd-88ef-480a-9905-8b120f723f8f" path="/var/lib/kubelet/pods/449436cd-88ef-480a-9905-8b120f723f8f/volumes" Feb 02 17:43:33 crc kubenswrapper[4858]: I0202 17:43:33.033260 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-n46db"] Feb 02 17:43:33 crc kubenswrapper[4858]: I0202 17:43:33.045964 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-n46db"] Feb 02 17:43:34 crc kubenswrapper[4858]: I0202 17:43:34.411614 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9127c71e-926f-4e20-b766-a957645d7dc9" path="/var/lib/kubelet/pods/9127c71e-926f-4e20-b766-a957645d7dc9/volumes" Feb 02 17:43:39 crc kubenswrapper[4858]: I0202 17:43:39.401189 4858 scope.go:117] 
"RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:43:39 crc kubenswrapper[4858]: E0202 17:43:39.402039 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:43:50 crc kubenswrapper[4858]: I0202 17:43:50.407086 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:43:50 crc kubenswrapper[4858]: E0202 17:43:50.408361 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:43:59 crc kubenswrapper[4858]: I0202 17:43:59.492920 4858 generic.go:334] "Generic (PLEG): container finished" podID="18853ae6-771f-43f8-a6e9-5501f381891d" containerID="ae8f9b82143112510aa305304457bb3a977774a2cab12ef006417812f227444f" exitCode=0 Feb 02 17:43:59 crc kubenswrapper[4858]: I0202 17:43:59.493088 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw" event={"ID":"18853ae6-771f-43f8-a6e9-5501f381891d","Type":"ContainerDied","Data":"ae8f9b82143112510aa305304457bb3a977774a2cab12ef006417812f227444f"} Feb 02 17:44:00 crc kubenswrapper[4858]: I0202 17:44:00.968482 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.096708 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18853ae6-771f-43f8-a6e9-5501f381891d-inventory\") pod \"18853ae6-771f-43f8-a6e9-5501f381891d\" (UID: \"18853ae6-771f-43f8-a6e9-5501f381891d\") " Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.096871 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhwh6\" (UniqueName: \"kubernetes.io/projected/18853ae6-771f-43f8-a6e9-5501f381891d-kube-api-access-rhwh6\") pod \"18853ae6-771f-43f8-a6e9-5501f381891d\" (UID: \"18853ae6-771f-43f8-a6e9-5501f381891d\") " Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.096909 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18853ae6-771f-43f8-a6e9-5501f381891d-ssh-key-openstack-edpm-ipam\") pod \"18853ae6-771f-43f8-a6e9-5501f381891d\" (UID: \"18853ae6-771f-43f8-a6e9-5501f381891d\") " Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.103594 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18853ae6-771f-43f8-a6e9-5501f381891d-kube-api-access-rhwh6" (OuterVolumeSpecName: "kube-api-access-rhwh6") pod "18853ae6-771f-43f8-a6e9-5501f381891d" (UID: "18853ae6-771f-43f8-a6e9-5501f381891d"). 
InnerVolumeSpecName "kube-api-access-rhwh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.129744 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18853ae6-771f-43f8-a6e9-5501f381891d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "18853ae6-771f-43f8-a6e9-5501f381891d" (UID: "18853ae6-771f-43f8-a6e9-5501f381891d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.131938 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18853ae6-771f-43f8-a6e9-5501f381891d-inventory" (OuterVolumeSpecName: "inventory") pod "18853ae6-771f-43f8-a6e9-5501f381891d" (UID: "18853ae6-771f-43f8-a6e9-5501f381891d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.199182 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhwh6\" (UniqueName: \"kubernetes.io/projected/18853ae6-771f-43f8-a6e9-5501f381891d-kube-api-access-rhwh6\") on node \"crc\" DevicePath \"\"" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.199222 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18853ae6-771f-43f8-a6e9-5501f381891d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.199233 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18853ae6-771f-43f8-a6e9-5501f381891d-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.400946 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:44:01 crc kubenswrapper[4858]: E0202 17:44:01.401444 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.514731 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw" event={"ID":"18853ae6-771f-43f8-a6e9-5501f381891d","Type":"ContainerDied","Data":"496fc82d8aee4c6bbb2694749b76e02dbef9322b98509e6e33a4433b23234535"} Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.515112 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="496fc82d8aee4c6bbb2694749b76e02dbef9322b98509e6e33a4433b23234535" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.514841 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.617908 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-2d55x"] Feb 02 17:44:01 crc kubenswrapper[4858]: E0202 17:44:01.619267 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18853ae6-771f-43f8-a6e9-5501f381891d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.619288 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="18853ae6-771f-43f8-a6e9-5501f381891d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.620186 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="18853ae6-771f-43f8-a6e9-5501f381891d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.621530 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-2d55x" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.625032 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.625232 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.625638 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.625795 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q7l94" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.630369 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-2d55x"] Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.709235 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ef22884-b1a4-454a-afa5-cde0aaa3439b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-2d55x\" (UID: \"4ef22884-b1a4-454a-afa5-cde0aaa3439b\") " pod="openstack/ssh-known-hosts-edpm-deployment-2d55x" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.709412 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4ef22884-b1a4-454a-afa5-cde0aaa3439b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-2d55x\" (UID: \"4ef22884-b1a4-454a-afa5-cde0aaa3439b\") " pod="openstack/ssh-known-hosts-edpm-deployment-2d55x" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.709602 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shqr5\" (UniqueName: \"kubernetes.io/projected/4ef22884-b1a4-454a-afa5-cde0aaa3439b-kube-api-access-shqr5\") pod \"ssh-known-hosts-edpm-deployment-2d55x\" (UID: \"4ef22884-b1a4-454a-afa5-cde0aaa3439b\") " pod="openstack/ssh-known-hosts-edpm-deployment-2d55x" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.811415 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: 
\"kubernetes.io/secret/4ef22884-b1a4-454a-afa5-cde0aaa3439b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-2d55x\" (UID: \"4ef22884-b1a4-454a-afa5-cde0aaa3439b\") " pod="openstack/ssh-known-hosts-edpm-deployment-2d55x" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.811525 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shqr5\" (UniqueName: \"kubernetes.io/projected/4ef22884-b1a4-454a-afa5-cde0aaa3439b-kube-api-access-shqr5\") pod \"ssh-known-hosts-edpm-deployment-2d55x\" (UID: \"4ef22884-b1a4-454a-afa5-cde0aaa3439b\") " pod="openstack/ssh-known-hosts-edpm-deployment-2d55x" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.811638 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ef22884-b1a4-454a-afa5-cde0aaa3439b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-2d55x\" (UID: \"4ef22884-b1a4-454a-afa5-cde0aaa3439b\") " pod="openstack/ssh-known-hosts-edpm-deployment-2d55x" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.816303 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ef22884-b1a4-454a-afa5-cde0aaa3439b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-2d55x\" (UID: \"4ef22884-b1a4-454a-afa5-cde0aaa3439b\") " pod="openstack/ssh-known-hosts-edpm-deployment-2d55x" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.821603 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4ef22884-b1a4-454a-afa5-cde0aaa3439b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-2d55x\" (UID: \"4ef22884-b1a4-454a-afa5-cde0aaa3439b\") " pod="openstack/ssh-known-hosts-edpm-deployment-2d55x" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.831956 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shqr5\" (UniqueName: \"kubernetes.io/projected/4ef22884-b1a4-454a-afa5-cde0aaa3439b-kube-api-access-shqr5\") pod \"ssh-known-hosts-edpm-deployment-2d55x\" (UID: \"4ef22884-b1a4-454a-afa5-cde0aaa3439b\") " pod="openstack/ssh-known-hosts-edpm-deployment-2d55x" Feb 02 17:44:01 crc kubenswrapper[4858]: I0202 17:44:01.944271 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-2d55x" Feb 02 17:44:02 crc kubenswrapper[4858]: I0202 17:44:02.316846 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-2d55x"] Feb 02 17:44:02 crc kubenswrapper[4858]: I0202 17:44:02.523903 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-2d55x" event={"ID":"4ef22884-b1a4-454a-afa5-cde0aaa3439b","Type":"ContainerStarted","Data":"fda5e43e86325c85396d9f83e759870a07c3e56d1c94a5d57f462eb262b386b0"} Feb 02 17:44:03 crc kubenswrapper[4858]: I0202 17:44:03.533269 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-2d55x" event={"ID":"4ef22884-b1a4-454a-afa5-cde0aaa3439b","Type":"ContainerStarted","Data":"7eed1482475d78346748ddc3551164b78d7422f42362e55f8cf153e162bf39f7"} Feb 02 17:44:03 crc kubenswrapper[4858]: I0202 17:44:03.552352 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-2d55x" podStartSLOduration=2.057809805 podStartE2EDuration="2.552330118s" podCreationTimestamp="2026-02-02 17:44:01 +0000 UTC" firstStartedPulling="2026-02-02 17:44:02.321100604 +0000 UTC m=+1743.473515869" lastFinishedPulling="2026-02-02 17:44:02.815620887 +0000 UTC m=+1743.968036182" observedRunningTime="2026-02-02 17:44:03.55099767 +0000 UTC m=+1744.703412925" watchObservedRunningTime="2026-02-02 17:44:03.552330118 +0000 UTC m=+1744.704745383" Feb 02 17:44:09 crc kubenswrapper[4858]: I0202 17:44:09.604522 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ef22884-b1a4-454a-afa5-cde0aaa3439b" containerID="7eed1482475d78346748ddc3551164b78d7422f42362e55f8cf153e162bf39f7" exitCode=0 Feb 02 17:44:09 crc kubenswrapper[4858]: I0202 17:44:09.604600 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-2d55x" event={"ID":"4ef22884-b1a4-454a-afa5-cde0aaa3439b","Type":"ContainerDied","Data":"7eed1482475d78346748ddc3551164b78d7422f42362e55f8cf153e162bf39f7"} Feb 02 17:44:10 crc kubenswrapper[4858]: I0202 17:44:10.884239 4858 scope.go:117] "RemoveContainer" containerID="48f72f956691769808795f3d948f0ee552f304181e03248add235e327356b436" Feb 02 17:44:10 crc kubenswrapper[4858]: I0202 17:44:10.928641 4858 scope.go:117] "RemoveContainer" containerID="6e98a6c8eb16216683547a70ceb430cb4ecf23589c16b81ac4ded50890f97096" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.042419 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-2d55x" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.101075 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4ef22884-b1a4-454a-afa5-cde0aaa3439b-inventory-0\") pod \"4ef22884-b1a4-454a-afa5-cde0aaa3439b\" (UID: \"4ef22884-b1a4-454a-afa5-cde0aaa3439b\") " Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.101315 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ef22884-b1a4-454a-afa5-cde0aaa3439b-ssh-key-openstack-edpm-ipam\") pod \"4ef22884-b1a4-454a-afa5-cde0aaa3439b\" (UID: \"4ef22884-b1a4-454a-afa5-cde0aaa3439b\") " Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.101442 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shqr5\" (UniqueName: \"kubernetes.io/projected/4ef22884-b1a4-454a-afa5-cde0aaa3439b-kube-api-access-shqr5\") pod \"4ef22884-b1a4-454a-afa5-cde0aaa3439b\" (UID: \"4ef22884-b1a4-454a-afa5-cde0aaa3439b\") " Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.107754 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ef22884-b1a4-454a-afa5-cde0aaa3439b-kube-api-access-shqr5" (OuterVolumeSpecName: "kube-api-access-shqr5") pod "4ef22884-b1a4-454a-afa5-cde0aaa3439b" (UID: "4ef22884-b1a4-454a-afa5-cde0aaa3439b"). InnerVolumeSpecName "kube-api-access-shqr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.132795 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef22884-b1a4-454a-afa5-cde0aaa3439b-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "4ef22884-b1a4-454a-afa5-cde0aaa3439b" (UID: "4ef22884-b1a4-454a-afa5-cde0aaa3439b"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.148180 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ef22884-b1a4-454a-afa5-cde0aaa3439b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4ef22884-b1a4-454a-afa5-cde0aaa3439b" (UID: "4ef22884-b1a4-454a-afa5-cde0aaa3439b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.204134 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4ef22884-b1a4-454a-afa5-cde0aaa3439b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.204178 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shqr5\" (UniqueName: \"kubernetes.io/projected/4ef22884-b1a4-454a-afa5-cde0aaa3439b-kube-api-access-shqr5\") on node \"crc\" DevicePath \"\"" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.204192 4858 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4ef22884-b1a4-454a-afa5-cde0aaa3439b-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.629867 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-2d55x" event={"ID":"4ef22884-b1a4-454a-afa5-cde0aaa3439b","Type":"ContainerDied","Data":"fda5e43e86325c85396d9f83e759870a07c3e56d1c94a5d57f462eb262b386b0"} Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.629940 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fda5e43e86325c85396d9f83e759870a07c3e56d1c94a5d57f462eb262b386b0" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.630436 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-2d55x" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.717213 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ztwkx"] Feb 02 17:44:11 crc kubenswrapper[4858]: E0202 17:44:11.717910 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ef22884-b1a4-454a-afa5-cde0aaa3439b" containerName="ssh-known-hosts-edpm-deployment" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.717948 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ef22884-b1a4-454a-afa5-cde0aaa3439b" containerName="ssh-known-hosts-edpm-deployment" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.718357 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ef22884-b1a4-454a-afa5-cde0aaa3439b" containerName="ssh-known-hosts-edpm-deployment" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.719343 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ztwkx" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.722426 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.723082 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q7l94" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.723897 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.726129 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.726769 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ztwkx"] Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.813526 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a932e2b-79f7-41ef-b7e6-1e0789b67551-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ztwkx\" (UID: \"4a932e2b-79f7-41ef-b7e6-1e0789b67551\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ztwkx" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.813773 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a932e2b-79f7-41ef-b7e6-1e0789b67551-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ztwkx\" (UID: \"4a932e2b-79f7-41ef-b7e6-1e0789b67551\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ztwkx" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.814071 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7qc5\" (UniqueName: \"kubernetes.io/projected/4a932e2b-79f7-41ef-b7e6-1e0789b67551-kube-api-access-t7qc5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ztwkx\" (UID: \"4a932e2b-79f7-41ef-b7e6-1e0789b67551\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ztwkx" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.916071 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7qc5\" (UniqueName: \"kubernetes.io/projected/4a932e2b-79f7-41ef-b7e6-1e0789b67551-kube-api-access-t7qc5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ztwkx\" (UID: \"4a932e2b-79f7-41ef-b7e6-1e0789b67551\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ztwkx" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.916149 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a932e2b-79f7-41ef-b7e6-1e0789b67551-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ztwkx\" (UID: \"4a932e2b-79f7-41ef-b7e6-1e0789b67551\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ztwkx" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.916234 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a932e2b-79f7-41ef-b7e6-1e0789b67551-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-ztwkx\" (UID: \"4a932e2b-79f7-41ef-b7e6-1e0789b67551\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ztwkx" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.922698 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a932e2b-79f7-41ef-b7e6-1e0789b67551-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ztwkx\" (UID: \"4a932e2b-79f7-41ef-b7e6-1e0789b67551\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ztwkx" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.922699 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a932e2b-79f7-41ef-b7e6-1e0789b67551-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ztwkx\" (UID: \"4a932e2b-79f7-41ef-b7e6-1e0789b67551\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ztwkx" Feb 02 17:44:11 crc kubenswrapper[4858]: I0202 17:44:11.935879 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7qc5\" (UniqueName: \"kubernetes.io/projected/4a932e2b-79f7-41ef-b7e6-1e0789b67551-kube-api-access-t7qc5\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ztwkx\" (UID: \"4a932e2b-79f7-41ef-b7e6-1e0789b67551\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ztwkx" Feb 02 17:44:12 crc kubenswrapper[4858]: I0202 17:44:12.060321 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ztwkx" Feb 02 17:44:12 crc kubenswrapper[4858]: I0202 17:44:12.401170 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:44:12 crc kubenswrapper[4858]: E0202 17:44:12.401459 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:44:12 crc kubenswrapper[4858]: I0202 17:44:12.606763 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ztwkx"] Feb 02 17:44:12 crc kubenswrapper[4858]: I0202 17:44:12.638186 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ztwkx" event={"ID":"4a932e2b-79f7-41ef-b7e6-1e0789b67551","Type":"ContainerStarted","Data":"ae5d7cc26e2d8568e184947b19b8687b1fb6956f46af660baf3b78d9557cc0b5"} Feb 02 17:44:13 crc kubenswrapper[4858]: I0202 17:44:13.649648 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ztwkx" event={"ID":"4a932e2b-79f7-41ef-b7e6-1e0789b67551","Type":"ContainerStarted","Data":"071fffe1f65c910f45ced590f0c991718b828736e99afd51bbdd38028151614c"} Feb 02 17:44:13 crc kubenswrapper[4858]: I0202 17:44:13.678650 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ztwkx" podStartSLOduration=2.236485152 podStartE2EDuration="2.678621048s" podCreationTimestamp="2026-02-02 17:44:11 +0000 UTC" firstStartedPulling="2026-02-02 
17:44:12.610501186 +0000 UTC m=+1753.762916451" lastFinishedPulling="2026-02-02 17:44:13.052637072 +0000 UTC m=+1754.205052347" observedRunningTime="2026-02-02 17:44:13.67094854 +0000 UTC m=+1754.823363835" watchObservedRunningTime="2026-02-02 17:44:13.678621048 +0000 UTC m=+1754.831036353" Feb 02 17:44:16 crc kubenswrapper[4858]: I0202 17:44:16.041839 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-dgbrn"] Feb 02 17:44:16 crc kubenswrapper[4858]: I0202 17:44:16.050469 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-dgbrn"] Feb 02 17:44:16 crc kubenswrapper[4858]: I0202 17:44:16.416693 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527" path="/var/lib/kubelet/pods/40aaa4b9-c5a1-43ba-ba2f-d2fa2f0b5527/volumes" Feb 02 17:44:21 crc kubenswrapper[4858]: I0202 17:44:21.753288 4858 generic.go:334] "Generic (PLEG): container finished" podID="4a932e2b-79f7-41ef-b7e6-1e0789b67551" containerID="071fffe1f65c910f45ced590f0c991718b828736e99afd51bbdd38028151614c" exitCode=0 Feb 02 17:44:21 crc kubenswrapper[4858]: I0202 17:44:21.753410 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ztwkx" event={"ID":"4a932e2b-79f7-41ef-b7e6-1e0789b67551","Type":"ContainerDied","Data":"071fffe1f65c910f45ced590f0c991718b828736e99afd51bbdd38028151614c"} Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.157509 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ztwkx" Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.228887 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a932e2b-79f7-41ef-b7e6-1e0789b67551-inventory\") pod \"4a932e2b-79f7-41ef-b7e6-1e0789b67551\" (UID: \"4a932e2b-79f7-41ef-b7e6-1e0789b67551\") " Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.228962 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a932e2b-79f7-41ef-b7e6-1e0789b67551-ssh-key-openstack-edpm-ipam\") pod \"4a932e2b-79f7-41ef-b7e6-1e0789b67551\" (UID: \"4a932e2b-79f7-41ef-b7e6-1e0789b67551\") " Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.229007 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7qc5\" (UniqueName: \"kubernetes.io/projected/4a932e2b-79f7-41ef-b7e6-1e0789b67551-kube-api-access-t7qc5\") pod \"4a932e2b-79f7-41ef-b7e6-1e0789b67551\" (UID: \"4a932e2b-79f7-41ef-b7e6-1e0789b67551\") " Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.235280 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a932e2b-79f7-41ef-b7e6-1e0789b67551-kube-api-access-t7qc5" (OuterVolumeSpecName: "kube-api-access-t7qc5") pod "4a932e2b-79f7-41ef-b7e6-1e0789b67551" (UID: "4a932e2b-79f7-41ef-b7e6-1e0789b67551"). InnerVolumeSpecName "kube-api-access-t7qc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.257401 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a932e2b-79f7-41ef-b7e6-1e0789b67551-inventory" (OuterVolumeSpecName: "inventory") pod "4a932e2b-79f7-41ef-b7e6-1e0789b67551" (UID: "4a932e2b-79f7-41ef-b7e6-1e0789b67551"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.283994 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a932e2b-79f7-41ef-b7e6-1e0789b67551-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4a932e2b-79f7-41ef-b7e6-1e0789b67551" (UID: "4a932e2b-79f7-41ef-b7e6-1e0789b67551"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.331052 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a932e2b-79f7-41ef-b7e6-1e0789b67551-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.331088 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a932e2b-79f7-41ef-b7e6-1e0789b67551-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.331099 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7qc5\" (UniqueName: \"kubernetes.io/projected/4a932e2b-79f7-41ef-b7e6-1e0789b67551-kube-api-access-t7qc5\") on node \"crc\" DevicePath \"\"" Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.400281 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:44:23 crc kubenswrapper[4858]: E0202 17:44:23.400523 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.778088 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ztwkx" event={"ID":"4a932e2b-79f7-41ef-b7e6-1e0789b67551","Type":"ContainerDied","Data":"ae5d7cc26e2d8568e184947b19b8687b1fb6956f46af660baf3b78d9557cc0b5"} Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.778307 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae5d7cc26e2d8568e184947b19b8687b1fb6956f46af660baf3b78d9557cc0b5" Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.778215 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ztwkx" Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.862050 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw"] Feb 02 17:44:23 crc kubenswrapper[4858]: E0202 17:44:23.862409 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a932e2b-79f7-41ef-b7e6-1e0789b67551" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.862425 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a932e2b-79f7-41ef-b7e6-1e0789b67551" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.862641 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a932e2b-79f7-41ef-b7e6-1e0789b67551" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.863304 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw" Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.865091 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.865258 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q7l94" Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.865542 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.876651 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw"] Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.892867 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.942437 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5dtm\" (UniqueName: \"kubernetes.io/projected/29249271-e3d7-41c6-8795-5c1b969161e0-kube-api-access-h5dtm\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw\" (UID: \"29249271-e3d7-41c6-8795-5c1b969161e0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw" Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.942554 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29249271-e3d7-41c6-8795-5c1b969161e0-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw\" (UID: \"29249271-e3d7-41c6-8795-5c1b969161e0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw" Feb 02 17:44:23 crc kubenswrapper[4858]: I0202 17:44:23.942606 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29249271-e3d7-41c6-8795-5c1b969161e0-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw\" (UID: \"29249271-e3d7-41c6-8795-5c1b969161e0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw" Feb 02 17:44:24 crc kubenswrapper[4858]: I0202 17:44:24.044490 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29249271-e3d7-41c6-8795-5c1b969161e0-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw\" (UID: \"29249271-e3d7-41c6-8795-5c1b969161e0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw" Feb 02 17:44:24 crc kubenswrapper[4858]: I0202 17:44:24.044627 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29249271-e3d7-41c6-8795-5c1b969161e0-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw\" (UID: \"29249271-e3d7-41c6-8795-5c1b969161e0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw" Feb 02 17:44:24 crc kubenswrapper[4858]: I0202 17:44:24.044812 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5dtm\" (UniqueName: \"kubernetes.io/projected/29249271-e3d7-41c6-8795-5c1b969161e0-kube-api-access-h5dtm\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw\" (UID: \"29249271-e3d7-41c6-8795-5c1b969161e0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw" Feb 02 17:44:24 crc kubenswrapper[4858]: I0202 17:44:24.049665 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29249271-e3d7-41c6-8795-5c1b969161e0-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw\" (UID: \"29249271-e3d7-41c6-8795-5c1b969161e0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw" Feb 02 17:44:24 crc kubenswrapper[4858]: I0202 17:44:24.050308 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29249271-e3d7-41c6-8795-5c1b969161e0-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw\" (UID: \"29249271-e3d7-41c6-8795-5c1b969161e0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw" Feb 02 17:44:24 crc kubenswrapper[4858]: I0202 17:44:24.062495 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5dtm\" (UniqueName: \"kubernetes.io/projected/29249271-e3d7-41c6-8795-5c1b969161e0-kube-api-access-h5dtm\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw\" (UID: \"29249271-e3d7-41c6-8795-5c1b969161e0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw" Feb 02 17:44:24 crc kubenswrapper[4858]: I0202 17:44:24.208880 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw" Feb 02 17:44:24 crc kubenswrapper[4858]: I0202 17:44:24.732866 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw"] Feb 02 17:44:24 crc kubenswrapper[4858]: I0202 17:44:24.789250 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw" event={"ID":"29249271-e3d7-41c6-8795-5c1b969161e0","Type":"ContainerStarted","Data":"e8ef7ac5da66795de7f3c74c7b6e8f8b7c61534a461e864c7201cdfa03d33271"} Feb 02 17:44:25 crc kubenswrapper[4858]: I0202 17:44:25.797500 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw" event={"ID":"29249271-e3d7-41c6-8795-5c1b969161e0","Type":"ContainerStarted","Data":"1a2d34602bdb77fedfae99ac7405f1c5ea7826471ef0c76110df06061dbbd50b"} Feb 02 17:44:25 crc kubenswrapper[4858]: I0202 17:44:25.817485 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw" podStartSLOduration=2.230657109 podStartE2EDuration="2.817458813s" podCreationTimestamp="2026-02-02 17:44:23 +0000 UTC" firstStartedPulling="2026-02-02 17:44:24.733408267 +0000 UTC m=+1765.885823532" lastFinishedPulling="2026-02-02 17:44:25.320209971 +0000 UTC m=+1766.472625236" observedRunningTime="2026-02-02 17:44:25.810088303 +0000 UTC m=+1766.962503578" watchObservedRunningTime="2026-02-02 17:44:25.817458813 +0000 UTC m=+1766.969874088" Feb 02 17:44:34 crc kubenswrapper[4858]: I0202 17:44:34.877475 4858 generic.go:334] "Generic (PLEG): container finished" podID="29249271-e3d7-41c6-8795-5c1b969161e0" containerID="1a2d34602bdb77fedfae99ac7405f1c5ea7826471ef0c76110df06061dbbd50b" exitCode=0 Feb 02 17:44:34 crc kubenswrapper[4858]: I0202 17:44:34.877571 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw" event={"ID":"29249271-e3d7-41c6-8795-5c1b969161e0","Type":"ContainerDied","Data":"1a2d34602bdb77fedfae99ac7405f1c5ea7826471ef0c76110df06061dbbd50b"} Feb 02 17:44:36 crc kubenswrapper[4858]: I0202 17:44:36.338584 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw" Feb 02 17:44:36 crc kubenswrapper[4858]: I0202 17:44:36.488813 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5dtm\" (UniqueName: \"kubernetes.io/projected/29249271-e3d7-41c6-8795-5c1b969161e0-kube-api-access-h5dtm\") pod \"29249271-e3d7-41c6-8795-5c1b969161e0\" (UID: \"29249271-e3d7-41c6-8795-5c1b969161e0\") " Feb 02 17:44:36 crc kubenswrapper[4858]: I0202 17:44:36.489068 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29249271-e3d7-41c6-8795-5c1b969161e0-ssh-key-openstack-edpm-ipam\") pod \"29249271-e3d7-41c6-8795-5c1b969161e0\" (UID: \"29249271-e3d7-41c6-8795-5c1b969161e0\") " Feb 02 17:44:36 crc kubenswrapper[4858]: I0202 17:44:36.489146 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29249271-e3d7-41c6-8795-5c1b969161e0-inventory\") pod \"29249271-e3d7-41c6-8795-5c1b969161e0\" (UID: \"29249271-e3d7-41c6-8795-5c1b969161e0\") " Feb 02 17:44:36 crc kubenswrapper[4858]: I0202 17:44:36.494389 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29249271-e3d7-41c6-8795-5c1b969161e0-kube-api-access-h5dtm" (OuterVolumeSpecName: "kube-api-access-h5dtm") pod "29249271-e3d7-41c6-8795-5c1b969161e0" (UID: "29249271-e3d7-41c6-8795-5c1b969161e0"). InnerVolumeSpecName "kube-api-access-h5dtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:44:36 crc kubenswrapper[4858]: I0202 17:44:36.514927 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29249271-e3d7-41c6-8795-5c1b969161e0-inventory" (OuterVolumeSpecName: "inventory") pod "29249271-e3d7-41c6-8795-5c1b969161e0" (UID: "29249271-e3d7-41c6-8795-5c1b969161e0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:44:36 crc kubenswrapper[4858]: I0202 17:44:36.519214 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29249271-e3d7-41c6-8795-5c1b969161e0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "29249271-e3d7-41c6-8795-5c1b969161e0" (UID: "29249271-e3d7-41c6-8795-5c1b969161e0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:44:36 crc kubenswrapper[4858]: I0202 17:44:36.592937 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5dtm\" (UniqueName: \"kubernetes.io/projected/29249271-e3d7-41c6-8795-5c1b969161e0-kube-api-access-h5dtm\") on node \"crc\" DevicePath \"\"" Feb 02 17:44:36 crc kubenswrapper[4858]: I0202 17:44:36.593007 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/29249271-e3d7-41c6-8795-5c1b969161e0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 17:44:36 crc kubenswrapper[4858]: I0202 17:44:36.593023 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29249271-e3d7-41c6-8795-5c1b969161e0-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 17:44:36 crc kubenswrapper[4858]: I0202 17:44:36.898155 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw" event={"ID":"29249271-e3d7-41c6-8795-5c1b969161e0","Type":"ContainerDied","Data":"e8ef7ac5da66795de7f3c74c7b6e8f8b7c61534a461e864c7201cdfa03d33271"} Feb 02 17:44:36 crc kubenswrapper[4858]: I0202 17:44:36.898246 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8ef7ac5da66795de7f3c74c7b6e8f8b7c61534a461e864c7201cdfa03d33271" Feb 02 17:44:36 crc kubenswrapper[4858]: I0202 17:44:36.898684 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw" Feb 02 17:44:36 crc kubenswrapper[4858]: I0202 17:44:36.995118 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56"] Feb 02 17:44:36 crc kubenswrapper[4858]: E0202 17:44:36.995587 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29249271-e3d7-41c6-8795-5c1b969161e0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 17:44:36 crc kubenswrapper[4858]: I0202 17:44:36.995614 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="29249271-e3d7-41c6-8795-5c1b969161e0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 17:44:36 crc kubenswrapper[4858]: I0202 17:44:36.995888 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="29249271-e3d7-41c6-8795-5c1b969161e0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 02 17:44:36 crc kubenswrapper[4858]: I0202 17:44:36.997549 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.000439 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.000713 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q7l94" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.000866 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.001043 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.001097 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.001111 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.001165 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.001227 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.001261 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.001316 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.001360 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.001379 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.001411 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.001465 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.001698 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.001721 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlpcs\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-kube-api-access-zlpcs\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.001744 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.001774 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.001798 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.002084 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.002369 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.011074 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56"] Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.017462 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.103208 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.103610 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.103812 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.103883 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlpcs\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-kube-api-access-zlpcs\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 
17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.103921 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.103964 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.104024 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.104193 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.104249 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.104305 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.104378 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.104409 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.104482 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.104541 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.109316 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.111615 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.122832 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.145092 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.145723 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.146088 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.146614 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.147102 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.151667 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.154728 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.155150 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.155913 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.156529 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.156728 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlpcs\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-kube-api-access-zlpcs\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4gs56\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.326945 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.401643 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:44:37 crc kubenswrapper[4858]: E0202 17:44:37.401938 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:44:37 crc kubenswrapper[4858]: I0202 17:44:37.909272 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56"] Feb 02 17:44:38 crc kubenswrapper[4858]: I0202 17:44:38.917795 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" event={"ID":"c9df746d-9cca-49c2-88e3-8be52b5e9531","Type":"ContainerStarted","Data":"cef23906d74d168655732b23cad6bb782378fcdde9e8ae80f4e607d2aa1ac98b"} Feb 02 17:44:38 crc kubenswrapper[4858]: I0202 17:44:38.918145 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" event={"ID":"c9df746d-9cca-49c2-88e3-8be52b5e9531","Type":"ContainerStarted","Data":"235e29db1ad6fe723eca64e7eeec8480693b5ad039ab9e46040bbf709225fb10"} Feb 02 17:44:38 crc kubenswrapper[4858]: I0202 17:44:38.939801 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" podStartSLOduration=2.443523261 podStartE2EDuration="2.939782174s" podCreationTimestamp="2026-02-02 17:44:36 +0000 UTC" firstStartedPulling="2026-02-02 17:44:37.919034568 +0000 UTC m=+1779.071449833" lastFinishedPulling="2026-02-02 17:44:38.415293471 +0000 UTC m=+1779.567708746" observedRunningTime="2026-02-02 17:44:38.932355264 +0000 UTC m=+1780.084770539" watchObservedRunningTime="2026-02-02 17:44:38.939782174 +0000 UTC m=+1780.092197439" Feb 02 17:44:51 crc kubenswrapper[4858]: I0202 17:44:51.400562 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:44:51 crc kubenswrapper[4858]: E0202 17:44:51.401403 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:45:00 crc kubenswrapper[4858]: I0202 17:45:00.148033 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500905-wpg7p"] Feb 02 17:45:00 crc kubenswrapper[4858]: I0202 17:45:00.150149 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500905-wpg7p" Feb 02 17:45:00 crc kubenswrapper[4858]: I0202 17:45:00.153329 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 17:45:00 crc kubenswrapper[4858]: I0202 17:45:00.153740 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 17:45:00 crc kubenswrapper[4858]: I0202 17:45:00.159970 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500905-wpg7p"] Feb 02 17:45:00 crc kubenswrapper[4858]: I0202 17:45:00.255340 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9d12a7d-cca1-4d5d-8513-b01410bee517-config-volume\") pod \"collect-profiles-29500905-wpg7p\" (UID: \"a9d12a7d-cca1-4d5d-8513-b01410bee517\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500905-wpg7p" Feb 02 17:45:00 crc kubenswrapper[4858]: I0202 17:45:00.255502 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25g5b\" (UniqueName: \"kubernetes.io/projected/a9d12a7d-cca1-4d5d-8513-b01410bee517-kube-api-access-25g5b\") pod \"collect-profiles-29500905-wpg7p\" (UID: \"a9d12a7d-cca1-4d5d-8513-b01410bee517\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500905-wpg7p" Feb 02 17:45:00 crc kubenswrapper[4858]: I0202 17:45:00.255536 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a9d12a7d-cca1-4d5d-8513-b01410bee517-secret-volume\") pod \"collect-profiles-29500905-wpg7p\" (UID: \"a9d12a7d-cca1-4d5d-8513-b01410bee517\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500905-wpg7p" Feb 02 17:45:00 crc kubenswrapper[4858]: I0202 17:45:00.357306 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a9d12a7d-cca1-4d5d-8513-b01410bee517-secret-volume\") pod \"collect-profiles-29500905-wpg7p\" (UID: \"a9d12a7d-cca1-4d5d-8513-b01410bee517\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500905-wpg7p" Feb 02 17:45:00 crc kubenswrapper[4858]: I0202 17:45:00.357668 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9d12a7d-cca1-4d5d-8513-b01410bee517-config-volume\") pod \"collect-profiles-29500905-wpg7p\" (UID: \"a9d12a7d-cca1-4d5d-8513-b01410bee517\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500905-wpg7p" Feb 02 17:45:00 crc kubenswrapper[4858]: I0202 17:45:00.357850 4858 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-25g5b\" (UniqueName: \"kubernetes.io/projected/a9d12a7d-cca1-4d5d-8513-b01410bee517-kube-api-access-25g5b\") pod \"collect-profiles-29500905-wpg7p\" (UID: \"a9d12a7d-cca1-4d5d-8513-b01410bee517\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500905-wpg7p" Feb 02 17:45:00 crc kubenswrapper[4858]: I0202 17:45:00.359868 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 17:45:00 crc kubenswrapper[4858]: I0202 17:45:00.363817 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a9d12a7d-cca1-4d5d-8513-b01410bee517-secret-volume\") pod \"collect-profiles-29500905-wpg7p\" (UID: \"a9d12a7d-cca1-4d5d-8513-b01410bee517\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500905-wpg7p" Feb 02 17:45:00 crc kubenswrapper[4858]: I0202 17:45:00.369590 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9d12a7d-cca1-4d5d-8513-b01410bee517-config-volume\") pod \"collect-profiles-29500905-wpg7p\" (UID: \"a9d12a7d-cca1-4d5d-8513-b01410bee517\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500905-wpg7p" Feb 02 17:45:00 crc kubenswrapper[4858]: I0202 17:45:00.380209 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25g5b\" (UniqueName: \"kubernetes.io/projected/a9d12a7d-cca1-4d5d-8513-b01410bee517-kube-api-access-25g5b\") pod \"collect-profiles-29500905-wpg7p\" (UID: \"a9d12a7d-cca1-4d5d-8513-b01410bee517\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500905-wpg7p" Feb 02 17:45:00 crc kubenswrapper[4858]: I0202 17:45:00.488006 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 17:45:00 crc kubenswrapper[4858]: I0202 17:45:00.496589 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500905-wpg7p" Feb 02 17:45:00 crc kubenswrapper[4858]: I0202 17:45:00.929955 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500905-wpg7p"] Feb 02 17:45:00 crc kubenswrapper[4858]: W0202 17:45:00.938905 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9d12a7d_cca1_4d5d_8513_b01410bee517.slice/crio-a5f13e9f490788e679ceb1dbde7fe3ebfa0973a4f59b0739d37d80b5090a8cab WatchSource:0}: Error finding container a5f13e9f490788e679ceb1dbde7fe3ebfa0973a4f59b0739d37d80b5090a8cab: Status 404 returned error can't find the container with id a5f13e9f490788e679ceb1dbde7fe3ebfa0973a4f59b0739d37d80b5090a8cab Feb 02 17:45:01 crc kubenswrapper[4858]: I0202 17:45:01.127149 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500905-wpg7p" event={"ID":"a9d12a7d-cca1-4d5d-8513-b01410bee517","Type":"ContainerStarted","Data":"ab4ab2df910a70063fd2c2169e7d4e96acae058f35272565b151e4f19db24406"} Feb 02 17:45:01 crc kubenswrapper[4858]: I0202 17:45:01.127473 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500905-wpg7p" event={"ID":"a9d12a7d-cca1-4d5d-8513-b01410bee517","Type":"ContainerStarted","Data":"a5f13e9f490788e679ceb1dbde7fe3ebfa0973a4f59b0739d37d80b5090a8cab"} Feb 02 17:45:01 crc kubenswrapper[4858]: I0202 17:45:01.153486 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500905-wpg7p" podStartSLOduration=1.153445276 podStartE2EDuration="1.153445276s" podCreationTimestamp="2026-02-02 17:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 17:45:01.14514257 +0000 UTC m=+1802.297557835" watchObservedRunningTime="2026-02-02 17:45:01.153445276 +0000 UTC m=+1802.305860541" Feb 02 17:45:02 crc kubenswrapper[4858]: I0202 17:45:02.137439 4858 generic.go:334] "Generic (PLEG): container finished" podID="a9d12a7d-cca1-4d5d-8513-b01410bee517" containerID="ab4ab2df910a70063fd2c2169e7d4e96acae058f35272565b151e4f19db24406" exitCode=0 Feb 02 17:45:02 crc kubenswrapper[4858]: I0202 17:45:02.137716 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500905-wpg7p" event={"ID":"a9d12a7d-cca1-4d5d-8513-b01410bee517","Type":"ContainerDied","Data":"ab4ab2df910a70063fd2c2169e7d4e96acae058f35272565b151e4f19db24406"} Feb 02 17:45:03 crc kubenswrapper[4858]: I0202 17:45:03.401155 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:45:03 crc kubenswrapper[4858]: E0202 17:45:03.401748 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:45:03 crc kubenswrapper[4858]: I0202 17:45:03.482703 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500905-wpg7p" Feb 02 17:45:03 crc kubenswrapper[4858]: I0202 17:45:03.627608 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25g5b\" (UniqueName: \"kubernetes.io/projected/a9d12a7d-cca1-4d5d-8513-b01410bee517-kube-api-access-25g5b\") pod \"a9d12a7d-cca1-4d5d-8513-b01410bee517\" (UID: \"a9d12a7d-cca1-4d5d-8513-b01410bee517\") " Feb 02 17:45:03 crc kubenswrapper[4858]: I0202 17:45:03.627917 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a9d12a7d-cca1-4d5d-8513-b01410bee517-secret-volume\") pod \"a9d12a7d-cca1-4d5d-8513-b01410bee517\" (UID: \"a9d12a7d-cca1-4d5d-8513-b01410bee517\") " Feb 02 17:45:03 crc kubenswrapper[4858]: I0202 17:45:03.627966 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9d12a7d-cca1-4d5d-8513-b01410bee517-config-volume\") pod \"a9d12a7d-cca1-4d5d-8513-b01410bee517\" (UID: \"a9d12a7d-cca1-4d5d-8513-b01410bee517\") " Feb 02 17:45:03 crc kubenswrapper[4858]: I0202 17:45:03.628698 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9d12a7d-cca1-4d5d-8513-b01410bee517-config-volume" (OuterVolumeSpecName: "config-volume") pod "a9d12a7d-cca1-4d5d-8513-b01410bee517" (UID: "a9d12a7d-cca1-4d5d-8513-b01410bee517"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:45:03 crc kubenswrapper[4858]: I0202 17:45:03.628961 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9d12a7d-cca1-4d5d-8513-b01410bee517-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 17:45:03 crc kubenswrapper[4858]: I0202 17:45:03.637340 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9d12a7d-cca1-4d5d-8513-b01410bee517-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a9d12a7d-cca1-4d5d-8513-b01410bee517" (UID: "a9d12a7d-cca1-4d5d-8513-b01410bee517"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:45:03 crc kubenswrapper[4858]: I0202 17:45:03.637451 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9d12a7d-cca1-4d5d-8513-b01410bee517-kube-api-access-25g5b" (OuterVolumeSpecName: "kube-api-access-25g5b") pod "a9d12a7d-cca1-4d5d-8513-b01410bee517" (UID: "a9d12a7d-cca1-4d5d-8513-b01410bee517"). InnerVolumeSpecName "kube-api-access-25g5b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:45:03 crc kubenswrapper[4858]: I0202 17:45:03.730238 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a9d12a7d-cca1-4d5d-8513-b01410bee517-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 17:45:03 crc kubenswrapper[4858]: I0202 17:45:03.730272 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25g5b\" (UniqueName: \"kubernetes.io/projected/a9d12a7d-cca1-4d5d-8513-b01410bee517-kube-api-access-25g5b\") on node \"crc\" DevicePath \"\"" Feb 02 17:45:04 crc kubenswrapper[4858]: I0202 17:45:04.158382 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500905-wpg7p" event={"ID":"a9d12a7d-cca1-4d5d-8513-b01410bee517","Type":"ContainerDied","Data":"a5f13e9f490788e679ceb1dbde7fe3ebfa0973a4f59b0739d37d80b5090a8cab"} Feb 02 17:45:04 crc kubenswrapper[4858]: I0202 17:45:04.158445 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5f13e9f490788e679ceb1dbde7fe3ebfa0973a4f59b0739d37d80b5090a8cab" Feb 02 17:45:04 crc kubenswrapper[4858]: I0202 17:45:04.158485 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500905-wpg7p" Feb 02 17:45:11 crc kubenswrapper[4858]: I0202 17:45:11.031891 4858 scope.go:117] "RemoveContainer" containerID="3a8912ade1b9684286e5dbc3736a2395cf89c9ba6ced6715212b5a8427155b2a" Feb 02 17:45:11 crc kubenswrapper[4858]: I0202 17:45:11.222186 4858 generic.go:334] "Generic (PLEG): container finished" podID="c9df746d-9cca-49c2-88e3-8be52b5e9531" containerID="cef23906d74d168655732b23cad6bb782378fcdde9e8ae80f4e607d2aa1ac98b" exitCode=0 Feb 02 17:45:11 crc kubenswrapper[4858]: I0202 17:45:11.222276 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" event={"ID":"c9df746d-9cca-49c2-88e3-8be52b5e9531","Type":"ContainerDied","Data":"cef23906d74d168655732b23cad6bb782378fcdde9e8ae80f4e607d2aa1ac98b"} Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.662240 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.796905 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-neutron-metadata-combined-ca-bundle\") pod \"c9df746d-9cca-49c2-88e3-8be52b5e9531\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.796995 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"c9df746d-9cca-49c2-88e3-8be52b5e9531\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.797063 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-ssh-key-openstack-edpm-ipam\") pod \"c9df746d-9cca-49c2-88e3-8be52b5e9531\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.797085 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"c9df746d-9cca-49c2-88e3-8be52b5e9531\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.797135 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-ovn-default-certs-0\") pod \"c9df746d-9cca-49c2-88e3-8be52b5e9531\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.797163 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-ovn-combined-ca-bundle\") pod \"c9df746d-9cca-49c2-88e3-8be52b5e9531\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.797232 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-bootstrap-combined-ca-bundle\") pod \"c9df746d-9cca-49c2-88e3-8be52b5e9531\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.797277 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-inventory\") pod \"c9df746d-9cca-49c2-88e3-8be52b5e9531\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.797312 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-nova-combined-ca-bundle\") pod \"c9df746d-9cca-49c2-88e3-8be52b5e9531\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " 
Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.797338 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-telemetry-combined-ca-bundle\") pod \"c9df746d-9cca-49c2-88e3-8be52b5e9531\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.797356 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-repo-setup-combined-ca-bundle\") pod \"c9df746d-9cca-49c2-88e3-8be52b5e9531\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.797386 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-libvirt-combined-ca-bundle\") pod \"c9df746d-9cca-49c2-88e3-8be52b5e9531\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.797434 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"c9df746d-9cca-49c2-88e3-8be52b5e9531\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.797462 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlpcs\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-kube-api-access-zlpcs\") pod \"c9df746d-9cca-49c2-88e3-8be52b5e9531\" (UID: \"c9df746d-9cca-49c2-88e3-8be52b5e9531\") " Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.804533 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "c9df746d-9cca-49c2-88e3-8be52b5e9531" (UID: "c9df746d-9cca-49c2-88e3-8be52b5e9531"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.805107 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-kube-api-access-zlpcs" (OuterVolumeSpecName: "kube-api-access-zlpcs") pod "c9df746d-9cca-49c2-88e3-8be52b5e9531" (UID: "c9df746d-9cca-49c2-88e3-8be52b5e9531"). InnerVolumeSpecName "kube-api-access-zlpcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.805166 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "c9df746d-9cca-49c2-88e3-8be52b5e9531" (UID: "c9df746d-9cca-49c2-88e3-8be52b5e9531"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.805306 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "c9df746d-9cca-49c2-88e3-8be52b5e9531" (UID: "c9df746d-9cca-49c2-88e3-8be52b5e9531"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.805909 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "c9df746d-9cca-49c2-88e3-8be52b5e9531" (UID: "c9df746d-9cca-49c2-88e3-8be52b5e9531"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.807172 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "c9df746d-9cca-49c2-88e3-8be52b5e9531" (UID: "c9df746d-9cca-49c2-88e3-8be52b5e9531"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.807483 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "c9df746d-9cca-49c2-88e3-8be52b5e9531" (UID: "c9df746d-9cca-49c2-88e3-8be52b5e9531"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.807915 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "c9df746d-9cca-49c2-88e3-8be52b5e9531" (UID: "c9df746d-9cca-49c2-88e3-8be52b5e9531"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.808584 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c9df746d-9cca-49c2-88e3-8be52b5e9531" (UID: "c9df746d-9cca-49c2-88e3-8be52b5e9531"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.809351 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "c9df746d-9cca-49c2-88e3-8be52b5e9531" (UID: "c9df746d-9cca-49c2-88e3-8be52b5e9531"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.809389 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "c9df746d-9cca-49c2-88e3-8be52b5e9531" (UID: "c9df746d-9cca-49c2-88e3-8be52b5e9531"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.812725 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "c9df746d-9cca-49c2-88e3-8be52b5e9531" (UID: "c9df746d-9cca-49c2-88e3-8be52b5e9531"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.835129 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-inventory" (OuterVolumeSpecName: "inventory") pod "c9df746d-9cca-49c2-88e3-8be52b5e9531" (UID: "c9df746d-9cca-49c2-88e3-8be52b5e9531"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.836533 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c9df746d-9cca-49c2-88e3-8be52b5e9531" (UID: "c9df746d-9cca-49c2-88e3-8be52b5e9531"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.900676 4858 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.900728 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.900743 4858 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.900753 4858 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.900764 4858 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.900775 4858 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.900794 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.900809 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlpcs\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-kube-api-access-zlpcs\") on node \"crc\" DevicePath \"\"" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.900820 4858 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.900832 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.900844 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.900858 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.900868 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c9df746d-9cca-49c2-88e3-8be52b5e9531-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 02 17:45:12 crc kubenswrapper[4858]: I0202 17:45:12.900879 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df746d-9cca-49c2-88e3-8be52b5e9531-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.247229 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" event={"ID":"c9df746d-9cca-49c2-88e3-8be52b5e9531","Type":"ContainerDied","Data":"235e29db1ad6fe723eca64e7eeec8480693b5ad039ab9e46040bbf709225fb10"} Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.247298 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="235e29db1ad6fe723eca64e7eeec8480693b5ad039ab9e46040bbf709225fb10" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.247390 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4gs56" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.340547 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh"] Feb 02 17:45:13 crc kubenswrapper[4858]: E0202 17:45:13.341030 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9df746d-9cca-49c2-88e3-8be52b5e9531" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.341049 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9df746d-9cca-49c2-88e3-8be52b5e9531" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 02 17:45:13 crc kubenswrapper[4858]: E0202 17:45:13.341073 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9d12a7d-cca1-4d5d-8513-b01410bee517" containerName="collect-profiles" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.341082 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9d12a7d-cca1-4d5d-8513-b01410bee517" containerName="collect-profiles" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.341307 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9d12a7d-cca1-4d5d-8513-b01410bee517" containerName="collect-profiles" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.341345 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9df746d-9cca-49c2-88e3-8be52b5e9531" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.342087 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.348502 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.348600 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.348790 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q7l94" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.348798 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.350495 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.353090 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh"] Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.411784 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d14bee68-7779-4c77-916e-a58d2a871918-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-scsrh\" (UID: \"d14bee68-7779-4c77-916e-a58d2a871918\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.412291 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d14bee68-7779-4c77-916e-a58d2a871918-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-scsrh\" (UID: \"d14bee68-7779-4c77-916e-a58d2a871918\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.412356 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxb4b\" (UniqueName: \"kubernetes.io/projected/d14bee68-7779-4c77-916e-a58d2a871918-kube-api-access-xxb4b\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-scsrh\" (UID: \"d14bee68-7779-4c77-916e-a58d2a871918\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.412502 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d14bee68-7779-4c77-916e-a58d2a871918-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-scsrh\" (UID: \"d14bee68-7779-4c77-916e-a58d2a871918\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.412611 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d14bee68-7779-4c77-916e-a58d2a871918-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-scsrh\" (UID: \"d14bee68-7779-4c77-916e-a58d2a871918\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.514861 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d14bee68-7779-4c77-916e-a58d2a871918-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-scsrh\" (UID: \"d14bee68-7779-4c77-916e-a58d2a871918\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.515027 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d14bee68-7779-4c77-916e-a58d2a871918-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-scsrh\" (UID: \"d14bee68-7779-4c77-916e-a58d2a871918\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.515167 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d14bee68-7779-4c77-916e-a58d2a871918-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-scsrh\" (UID: \"d14bee68-7779-4c77-916e-a58d2a871918\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.515269 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxb4b\" (UniqueName: \"kubernetes.io/projected/d14bee68-7779-4c77-916e-a58d2a871918-kube-api-access-xxb4b\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-scsrh\" (UID: \"d14bee68-7779-4c77-916e-a58d2a871918\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.515424 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d14bee68-7779-4c77-916e-a58d2a871918-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-scsrh\" (UID: \"d14bee68-7779-4c77-916e-a58d2a871918\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.515921 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d14bee68-7779-4c77-916e-a58d2a871918-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-scsrh\" (UID: \"d14bee68-7779-4c77-916e-a58d2a871918\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.519991 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d14bee68-7779-4c77-916e-a58d2a871918-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-scsrh\" (UID: \"d14bee68-7779-4c77-916e-a58d2a871918\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.520989 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d14bee68-7779-4c77-916e-a58d2a871918-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-scsrh\" (UID: \"d14bee68-7779-4c77-916e-a58d2a871918\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.524612 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d14bee68-7779-4c77-916e-a58d2a871918-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-scsrh\" (UID: \"d14bee68-7779-4c77-916e-a58d2a871918\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.533183 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxb4b\" (UniqueName: \"kubernetes.io/projected/d14bee68-7779-4c77-916e-a58d2a871918-kube-api-access-xxb4b\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-scsrh\" (UID: \"d14bee68-7779-4c77-916e-a58d2a871918\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" Feb 02 17:45:13 crc kubenswrapper[4858]: I0202 17:45:13.662918 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" Feb 02 17:45:14 crc kubenswrapper[4858]: I0202 17:45:14.201313 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh"] Feb 02 17:45:14 crc kubenswrapper[4858]: I0202 17:45:14.258161 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" event={"ID":"d14bee68-7779-4c77-916e-a58d2a871918","Type":"ContainerStarted","Data":"3edafe0e2ef0677031a7efca2d479aab3dfc47f37c959778539bc1a4a86aa232"} Feb 02 17:45:14 crc kubenswrapper[4858]: I0202 17:45:14.404176 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:45:14 crc kubenswrapper[4858]: E0202 17:45:14.404541 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:45:15 crc kubenswrapper[4858]: I0202 17:45:15.267659 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" event={"ID":"d14bee68-7779-4c77-916e-a58d2a871918","Type":"ContainerStarted","Data":"e483e6f37ab7b073eddc12330b8276500b9c9c0ce0c9327fe05aff205558afb7"} Feb 02 17:45:15 crc kubenswrapper[4858]: I0202 17:45:15.289542 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" podStartSLOduration=1.852364643 podStartE2EDuration="2.289518787s" podCreationTimestamp="2026-02-02 17:45:13 +0000 UTC" firstStartedPulling="2026-02-02 17:45:14.214075617 +0000 UTC m=+1815.366490882" lastFinishedPulling="2026-02-02 17:45:14.651229761 +0000 UTC m=+1815.803645026" observedRunningTime="2026-02-02 17:45:15.283676571 +0000 UTC m=+1816.436091836" watchObservedRunningTime="2026-02-02 17:45:15.289518787 +0000 UTC m=+1816.441934052" Feb 02 17:45:25 crc kubenswrapper[4858]: I0202 17:45:25.401583 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:45:25 crc kubenswrapper[4858]: E0202 17:45:25.402620 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:45:39 crc kubenswrapper[4858]: I0202 17:45:39.399935 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:45:39 crc kubenswrapper[4858]: E0202 17:45:39.403635 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:45:52 crc kubenswrapper[4858]: I0202 17:45:52.401704 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:45:52 crc kubenswrapper[4858]: E0202 17:45:52.402770 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:46:06 crc kubenswrapper[4858]: I0202 17:46:06.402132 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:46:06 crc kubenswrapper[4858]: E0202 17:46:06.403333 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:46:10 crc kubenswrapper[4858]: I0202 17:46:10.766725 4858 generic.go:334] "Generic (PLEG): container finished" podID="d14bee68-7779-4c77-916e-a58d2a871918" containerID="e483e6f37ab7b073eddc12330b8276500b9c9c0ce0c9327fe05aff205558afb7" exitCode=0 Feb 02 17:46:10 crc kubenswrapper[4858]: I0202 17:46:10.766815 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" event={"ID":"d14bee68-7779-4c77-916e-a58d2a871918","Type":"ContainerDied","Data":"e483e6f37ab7b073eddc12330b8276500b9c9c0ce0c9327fe05aff205558afb7"} Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.225518 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.360900 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxb4b\" (UniqueName: \"kubernetes.io/projected/d14bee68-7779-4c77-916e-a58d2a871918-kube-api-access-xxb4b\") pod \"d14bee68-7779-4c77-916e-a58d2a871918\" (UID: \"d14bee68-7779-4c77-916e-a58d2a871918\") " Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.361069 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d14bee68-7779-4c77-916e-a58d2a871918-ovn-combined-ca-bundle\") pod \"d14bee68-7779-4c77-916e-a58d2a871918\" (UID: \"d14bee68-7779-4c77-916e-a58d2a871918\") " Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.362118 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d14bee68-7779-4c77-916e-a58d2a871918-ovncontroller-config-0\") pod \"d14bee68-7779-4c77-916e-a58d2a871918\" (UID: \"d14bee68-7779-4c77-916e-a58d2a871918\") " Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.362278 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d14bee68-7779-4c77-916e-a58d2a871918-inventory\") pod \"d14bee68-7779-4c77-916e-a58d2a871918\" (UID: \"d14bee68-7779-4c77-916e-a58d2a871918\") " Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.362331 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d14bee68-7779-4c77-916e-a58d2a871918-ssh-key-openstack-edpm-ipam\") pod \"d14bee68-7779-4c77-916e-a58d2a871918\" (UID: \"d14bee68-7779-4c77-916e-a58d2a871918\") " Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.368916 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d14bee68-7779-4c77-916e-a58d2a871918-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d14bee68-7779-4c77-916e-a58d2a871918" (UID: "d14bee68-7779-4c77-916e-a58d2a871918"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.369553 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d14bee68-7779-4c77-916e-a58d2a871918-kube-api-access-xxb4b" (OuterVolumeSpecName: "kube-api-access-xxb4b") pod "d14bee68-7779-4c77-916e-a58d2a871918" (UID: "d14bee68-7779-4c77-916e-a58d2a871918"). InnerVolumeSpecName "kube-api-access-xxb4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.391360 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d14bee68-7779-4c77-916e-a58d2a871918-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d14bee68-7779-4c77-916e-a58d2a871918" (UID: "d14bee68-7779-4c77-916e-a58d2a871918"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.393028 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d14bee68-7779-4c77-916e-a58d2a871918-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "d14bee68-7779-4c77-916e-a58d2a871918" (UID: "d14bee68-7779-4c77-916e-a58d2a871918"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.393770 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d14bee68-7779-4c77-916e-a58d2a871918-inventory" (OuterVolumeSpecName: "inventory") pod "d14bee68-7779-4c77-916e-a58d2a871918" (UID: "d14bee68-7779-4c77-916e-a58d2a871918"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.464817 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxb4b\" (UniqueName: \"kubernetes.io/projected/d14bee68-7779-4c77-916e-a58d2a871918-kube-api-access-xxb4b\") on node \"crc\" DevicePath \"\"" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.465012 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d14bee68-7779-4c77-916e-a58d2a871918-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.465084 4858 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/d14bee68-7779-4c77-916e-a58d2a871918-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.465155 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d14bee68-7779-4c77-916e-a58d2a871918-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.465213 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d14bee68-7779-4c77-916e-a58d2a871918-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.789945 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" event={"ID":"d14bee68-7779-4c77-916e-a58d2a871918","Type":"ContainerDied","Data":"3edafe0e2ef0677031a7efca2d479aab3dfc47f37c959778539bc1a4a86aa232"} Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.790029 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3edafe0e2ef0677031a7efca2d479aab3dfc47f37c959778539bc1a4a86aa232" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.790611 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-scsrh" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.966599 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2"] Feb 02 17:46:12 crc kubenswrapper[4858]: E0202 17:46:12.967065 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d14bee68-7779-4c77-916e-a58d2a871918" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.967088 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d14bee68-7779-4c77-916e-a58d2a871918" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.967341 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d14bee68-7779-4c77-916e-a58d2a871918" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.968115 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.971172 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.971912 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.973525 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.973573 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vz4n\" (UniqueName: \"kubernetes.io/projected/888cd580-fe65-443a-ac8f-351364f34183-kube-api-access-6vz4n\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.973639 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.973661 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:46:12 crc 
kubenswrapper[4858]: I0202 17:46:12.973913 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.974315 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.974432 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.974755 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q7l94" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.975217 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.975597 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 02 17:46:12 crc kubenswrapper[4858]: I0202 17:46:12.978709 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2"] Feb 02 17:46:13 crc kubenswrapper[4858]: I0202 17:46:13.077059 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:46:13 crc kubenswrapper[4858]: I0202 17:46:13.077133 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:46:13 crc kubenswrapper[4858]: I0202 17:46:13.077180 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vz4n\" (UniqueName: \"kubernetes.io/projected/888cd580-fe65-443a-ac8f-351364f34183-kube-api-access-6vz4n\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:46:13 crc kubenswrapper[4858]: I0202 17:46:13.077247 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:46:13 crc kubenswrapper[4858]: I0202 17:46:13.077285 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:46:13 crc kubenswrapper[4858]: I0202 17:46:13.077354 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:46:13 crc kubenswrapper[4858]: I0202 17:46:13.084804 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:46:13 crc kubenswrapper[4858]: I0202 17:46:13.085016 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:46:13 crc kubenswrapper[4858]: I0202 17:46:13.086253 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:46:13 crc kubenswrapper[4858]: I0202 17:46:13.091731 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:46:13 crc kubenswrapper[4858]: I0202 17:46:13.100084 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:46:13 
crc kubenswrapper[4858]: I0202 17:46:13.100821 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vz4n\" (UniqueName: \"kubernetes.io/projected/888cd580-fe65-443a-ac8f-351364f34183-kube-api-access-6vz4n\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:46:13 crc kubenswrapper[4858]: I0202 17:46:13.336209 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:46:13 crc kubenswrapper[4858]: I0202 17:46:13.889409 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2"] Feb 02 17:46:13 crc kubenswrapper[4858]: I0202 17:46:13.897662 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 17:46:14 crc kubenswrapper[4858]: I0202 17:46:14.817532 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" event={"ID":"888cd580-fe65-443a-ac8f-351364f34183","Type":"ContainerStarted","Data":"d0c78959ba6765179ceb9c2db2b7bd9bdc371f3fadb74507325f220ef73cfb02"} Feb 02 17:46:15 crc kubenswrapper[4858]: I0202 17:46:15.832483 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" event={"ID":"888cd580-fe65-443a-ac8f-351364f34183","Type":"ContainerStarted","Data":"72e1a3faaac29528b7861c46bb4d8b658771037579b9918ea4b6fdfcb58becb1"} Feb 02 17:46:15 crc kubenswrapper[4858]: I0202 17:46:15.872699 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" podStartSLOduration=3.164584355 podStartE2EDuration="3.872675084s" podCreationTimestamp="2026-02-02 17:46:12 +0000 UTC" firstStartedPulling="2026-02-02 17:46:13.897318108 +0000 UTC m=+1875.049733373" lastFinishedPulling="2026-02-02 17:46:14.605408827 +0000 UTC m=+1875.757824102" observedRunningTime="2026-02-02 17:46:15.856083863 +0000 UTC m=+1877.008499168" watchObservedRunningTime="2026-02-02 17:46:15.872675084 +0000 UTC m=+1877.025090359" Feb 02 17:46:21 crc kubenswrapper[4858]: I0202 17:46:21.400503 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:46:21 crc kubenswrapper[4858]: E0202 17:46:21.401314 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:46:35 crc kubenswrapper[4858]: I0202 17:46:35.400854 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:46:35 crc kubenswrapper[4858]: E0202 17:46:35.401553 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:46:47 crc kubenswrapper[4858]: I0202 17:46:47.400346 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:46:47 crc kubenswrapper[4858]: E0202 17:46:47.402515 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:47:00 crc kubenswrapper[4858]: I0202 17:47:00.408450 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:47:00 crc kubenswrapper[4858]: E0202 17:47:00.409191 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:47:01 crc kubenswrapper[4858]: I0202 17:47:01.302141 4858 generic.go:334] "Generic (PLEG): container finished" podID="888cd580-fe65-443a-ac8f-351364f34183" containerID="72e1a3faaac29528b7861c46bb4d8b658771037579b9918ea4b6fdfcb58becb1" exitCode=0 Feb 02 17:47:01 crc kubenswrapper[4858]: I0202 17:47:01.302296 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" event={"ID":"888cd580-fe65-443a-ac8f-351364f34183","Type":"ContainerDied","Data":"72e1a3faaac29528b7861c46bb4d8b658771037579b9918ea4b6fdfcb58becb1"} Feb 02 17:47:02 crc kubenswrapper[4858]: I0202 17:47:02.701576 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:47:02 crc kubenswrapper[4858]: I0202 17:47:02.800115 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vz4n\" (UniqueName: \"kubernetes.io/projected/888cd580-fe65-443a-ac8f-351364f34183-kube-api-access-6vz4n\") pod \"888cd580-fe65-443a-ac8f-351364f34183\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " Feb 02 17:47:02 crc kubenswrapper[4858]: I0202 17:47:02.800181 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-ssh-key-openstack-edpm-ipam\") pod \"888cd580-fe65-443a-ac8f-351364f34183\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " Feb 02 17:47:02 crc kubenswrapper[4858]: I0202 17:47:02.800244 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-neutron-ovn-metadata-agent-neutron-config-0\") pod \"888cd580-fe65-443a-ac8f-351364f34183\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " Feb 02 17:47:02 crc kubenswrapper[4858]: I0202 17:47:02.800271 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-inventory\") pod \"888cd580-fe65-443a-ac8f-351364f34183\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " Feb 02 17:47:02 crc kubenswrapper[4858]: I0202 17:47:02.800297 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-nova-metadata-neutron-config-0\") pod \"888cd580-fe65-443a-ac8f-351364f34183\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " Feb 02 17:47:02 crc kubenswrapper[4858]: I0202 17:47:02.800512 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-neutron-metadata-combined-ca-bundle\") pod \"888cd580-fe65-443a-ac8f-351364f34183\" (UID: \"888cd580-fe65-443a-ac8f-351364f34183\") " Feb 02 17:47:02 crc kubenswrapper[4858]: I0202 17:47:02.808860 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/888cd580-fe65-443a-ac8f-351364f34183-kube-api-access-6vz4n" (OuterVolumeSpecName: "kube-api-access-6vz4n") pod "888cd580-fe65-443a-ac8f-351364f34183" (UID: "888cd580-fe65-443a-ac8f-351364f34183"). InnerVolumeSpecName "kube-api-access-6vz4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:47:02 crc kubenswrapper[4858]: I0202 17:47:02.810039 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "888cd580-fe65-443a-ac8f-351364f34183" (UID: "888cd580-fe65-443a-ac8f-351364f34183"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:47:02 crc kubenswrapper[4858]: I0202 17:47:02.827695 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "888cd580-fe65-443a-ac8f-351364f34183" (UID: "888cd580-fe65-443a-ac8f-351364f34183"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:47:02 crc kubenswrapper[4858]: I0202 17:47:02.829347 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "888cd580-fe65-443a-ac8f-351364f34183" (UID: "888cd580-fe65-443a-ac8f-351364f34183"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:47:02 crc kubenswrapper[4858]: I0202 17:47:02.830665 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "888cd580-fe65-443a-ac8f-351364f34183" (UID: "888cd580-fe65-443a-ac8f-351364f34183"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:47:02 crc kubenswrapper[4858]: I0202 17:47:02.837164 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-inventory" (OuterVolumeSpecName: "inventory") pod "888cd580-fe65-443a-ac8f-351364f34183" (UID: "888cd580-fe65-443a-ac8f-351364f34183"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:47:02 crc kubenswrapper[4858]: I0202 17:47:02.902763 4858 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:47:02 crc kubenswrapper[4858]: I0202 17:47:02.902805 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vz4n\" (UniqueName: \"kubernetes.io/projected/888cd580-fe65-443a-ac8f-351364f34183-kube-api-access-6vz4n\") on node \"crc\" DevicePath \"\"" Feb 02 17:47:02 crc kubenswrapper[4858]: I0202 17:47:02.902817 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 17:47:02 crc kubenswrapper[4858]: I0202 17:47:02.902827 4858 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 17:47:02 crc kubenswrapper[4858]: I0202 17:47:02.902840 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 17:47:02 crc kubenswrapper[4858]: I0202 17:47:02.902848 4858 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/888cd580-fe65-443a-ac8f-351364f34183-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.320035 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" event={"ID":"888cd580-fe65-443a-ac8f-351364f34183","Type":"ContainerDied","Data":"d0c78959ba6765179ceb9c2db2b7bd9bdc371f3fadb74507325f220ef73cfb02"} Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.320308 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0c78959ba6765179ceb9c2db2b7bd9bdc371f3fadb74507325f220ef73cfb02" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.320104 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.408920 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc"] Feb 02 17:47:03 crc kubenswrapper[4858]: E0202 17:47:03.409755 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888cd580-fe65-443a-ac8f-351364f34183" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.409821 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="888cd580-fe65-443a-ac8f-351364f34183" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.410289 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="888cd580-fe65-443a-ac8f-351364f34183" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.410958 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.416410 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.416495 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q7l94" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.416602 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.416691 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.416852 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.426379 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc"] Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.514659 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc\" (UID: \"2ab876f9-d750-4647-8212-6f9c4bee6eee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.514712 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc\" (UID: \"2ab876f9-d750-4647-8212-6f9c4bee6eee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.514861 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc\" (UID: \"2ab876f9-d750-4647-8212-6f9c4bee6eee\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.515093 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t2zt\" (UniqueName: \"kubernetes.io/projected/2ab876f9-d750-4647-8212-6f9c4bee6eee-kube-api-access-5t2zt\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc\" (UID: \"2ab876f9-d750-4647-8212-6f9c4bee6eee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.515236 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc\" (UID: \"2ab876f9-d750-4647-8212-6f9c4bee6eee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.616442 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc\" (UID: \"2ab876f9-d750-4647-8212-6f9c4bee6eee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.616527 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t2zt\" (UniqueName: \"kubernetes.io/projected/2ab876f9-d750-4647-8212-6f9c4bee6eee-kube-api-access-5t2zt\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc\" (UID: \"2ab876f9-d750-4647-8212-6f9c4bee6eee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.616573 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc\" (UID: \"2ab876f9-d750-4647-8212-6f9c4bee6eee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.616602 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc\" (UID: \"2ab876f9-d750-4647-8212-6f9c4bee6eee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.616626 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc\" (UID: \"2ab876f9-d750-4647-8212-6f9c4bee6eee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.620267 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc\" (UID: 
\"2ab876f9-d750-4647-8212-6f9c4bee6eee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.620693 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc\" (UID: \"2ab876f9-d750-4647-8212-6f9c4bee6eee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.621459 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc\" (UID: \"2ab876f9-d750-4647-8212-6f9c4bee6eee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.624580 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc\" (UID: \"2ab876f9-d750-4647-8212-6f9c4bee6eee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.642109 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t2zt\" (UniqueName: \"kubernetes.io/projected/2ab876f9-d750-4647-8212-6f9c4bee6eee-kube-api-access-5t2zt\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc\" (UID: \"2ab876f9-d750-4647-8212-6f9c4bee6eee\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" Feb 02 17:47:03 crc kubenswrapper[4858]: I0202 17:47:03.732601 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" Feb 02 17:47:04 crc kubenswrapper[4858]: I0202 17:47:04.247388 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc"] Feb 02 17:47:04 crc kubenswrapper[4858]: I0202 17:47:04.330066 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" event={"ID":"2ab876f9-d750-4647-8212-6f9c4bee6eee","Type":"ContainerStarted","Data":"b20babc308a7cc626956460cfe49a30a5d5822a4144f51e6a53741ea4a274fea"} Feb 02 17:47:05 crc kubenswrapper[4858]: I0202 17:47:05.340087 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" event={"ID":"2ab876f9-d750-4647-8212-6f9c4bee6eee","Type":"ContainerStarted","Data":"e50aa59762f3516e41f784c696524053504de6c0ab9517dd6c22de659610436f"} Feb 02 17:47:05 crc kubenswrapper[4858]: I0202 17:47:05.356646 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" podStartSLOduration=1.8616989560000001 podStartE2EDuration="2.356621411s" podCreationTimestamp="2026-02-02 17:47:03 +0000 UTC" firstStartedPulling="2026-02-02 17:47:04.25162141 +0000 UTC m=+1925.404036675" lastFinishedPulling="2026-02-02 17:47:04.746543875 +0000 UTC m=+1925.898959130" observedRunningTime="2026-02-02 17:47:05.35485399 +0000 UTC m=+1926.507269255" watchObservedRunningTime="2026-02-02 17:47:05.356621411 +0000 UTC m=+1926.509036676" Feb 02 17:47:12 crc kubenswrapper[4858]: I0202 17:47:12.400760 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:47:12 crc kubenswrapper[4858]: E0202 17:47:12.402062 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:47:25 crc kubenswrapper[4858]: I0202 17:47:25.401637 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:47:25 crc kubenswrapper[4858]: E0202 17:47:25.402550 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:47:36 crc kubenswrapper[4858]: I0202 17:47:36.403251 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:47:36 crc kubenswrapper[4858]: E0202 17:47:36.404395 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:47:51 crc kubenswrapper[4858]: I0202 17:47:51.401060 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:47:51 crc kubenswrapper[4858]: E0202 17:47:51.401994 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:48:05 crc kubenswrapper[4858]: I0202 17:48:05.401809 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:48:05 crc kubenswrapper[4858]: I0202 17:48:05.929110 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerStarted","Data":"f7314437f51bcd568115b72c0a6734244eecb76cbfce437d2cfdd1f5575dfab9"} Feb 02 17:49:05 crc kubenswrapper[4858]: I0202 17:49:05.487225 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-26v56"] Feb 02 17:49:05 crc kubenswrapper[4858]: I0202 17:49:05.489791 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26v56" Feb 02 17:49:05 crc kubenswrapper[4858]: I0202 17:49:05.520259 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-26v56"] Feb 02 17:49:05 crc kubenswrapper[4858]: I0202 17:49:05.581417 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl4l9\" (UniqueName: \"kubernetes.io/projected/26aff673-3cfc-4ae8-a5df-42fc9ca9ec30-kube-api-access-rl4l9\") pod \"redhat-marketplace-26v56\" (UID: \"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30\") " pod="openshift-marketplace/redhat-marketplace-26v56" Feb 02 17:49:05 crc kubenswrapper[4858]: I0202 17:49:05.581471 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26aff673-3cfc-4ae8-a5df-42fc9ca9ec30-catalog-content\") pod \"redhat-marketplace-26v56\" (UID: \"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30\") " pod="openshift-marketplace/redhat-marketplace-26v56" Feb 02 17:49:05 crc kubenswrapper[4858]: I0202 17:49:05.581760 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26aff673-3cfc-4ae8-a5df-42fc9ca9ec30-utilities\") pod \"redhat-marketplace-26v56\" (UID: \"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30\") " pod="openshift-marketplace/redhat-marketplace-26v56" Feb 02 17:49:05 crc kubenswrapper[4858]: I0202 17:49:05.683223 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl4l9\" (UniqueName: \"kubernetes.io/projected/26aff673-3cfc-4ae8-a5df-42fc9ca9ec30-kube-api-access-rl4l9\") pod \"redhat-marketplace-26v56\" (UID: \"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30\") " pod="openshift-marketplace/redhat-marketplace-26v56" Feb 02 17:49:05 crc kubenswrapper[4858]: I0202 17:49:05.683284 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26aff673-3cfc-4ae8-a5df-42fc9ca9ec30-catalog-content\") pod \"redhat-marketplace-26v56\" (UID: \"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30\") " pod="openshift-marketplace/redhat-marketplace-26v56" Feb 02 17:49:05 crc kubenswrapper[4858]: I0202 17:49:05.683394 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26aff673-3cfc-4ae8-a5df-42fc9ca9ec30-utilities\") pod \"redhat-marketplace-26v56\" (UID: \"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30\") " pod="openshift-marketplace/redhat-marketplace-26v56" Feb 02 17:49:05 crc kubenswrapper[4858]: I0202 17:49:05.683868 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26aff673-3cfc-4ae8-a5df-42fc9ca9ec30-catalog-content\") pod \"redhat-marketplace-26v56\" (UID: \"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30\") " pod="openshift-marketplace/redhat-marketplace-26v56" Feb 02 17:49:05 crc kubenswrapper[4858]: I0202 17:49:05.683986 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26aff673-3cfc-4ae8-a5df-42fc9ca9ec30-utilities\") pod \"redhat-marketplace-26v56\" (UID: \"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30\") " pod="openshift-marketplace/redhat-marketplace-26v56" Feb 02 17:49:05 crc kubenswrapper[4858]: I0202 17:49:05.703209 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl4l9\" (UniqueName: \"kubernetes.io/projected/26aff673-3cfc-4ae8-a5df-42fc9ca9ec30-kube-api-access-rl4l9\") pod \"redhat-marketplace-26v56\" (UID: \"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30\") " pod="openshift-marketplace/redhat-marketplace-26v56" Feb 02 17:49:05 crc kubenswrapper[4858]: I0202 17:49:05.826258 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26v56" Feb 02 17:49:06 crc kubenswrapper[4858]: I0202 17:49:06.335652 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-26v56"] Feb 02 17:49:06 crc kubenswrapper[4858]: I0202 17:49:06.479659 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26v56" event={"ID":"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30","Type":"ContainerStarted","Data":"0dc5767c9872550971d2cf16807fcf6a252c864d6ebe53b627247f6b602a8a61"} Feb 02 17:49:07 crc kubenswrapper[4858]: I0202 17:49:07.491453 4858 generic.go:334] "Generic (PLEG): container finished" podID="26aff673-3cfc-4ae8-a5df-42fc9ca9ec30" containerID="f79b2d6da4fbfcc006970e168dc80bd984d72e6cce5cbca4e5f1ce9dc1a97cf1" exitCode=0 Feb 02 17:49:07 crc kubenswrapper[4858]: I0202 17:49:07.491528 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26v56" event={"ID":"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30","Type":"ContainerDied","Data":"f79b2d6da4fbfcc006970e168dc80bd984d72e6cce5cbca4e5f1ce9dc1a97cf1"} Feb 02 17:49:08 crc kubenswrapper[4858]: I0202 17:49:08.501243 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26v56" event={"ID":"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30","Type":"ContainerStarted","Data":"9e44b5914e06a253f7c555e4c9546d688641f100c7bab047de82d392b8061b15"} Feb 02 17:49:09 crc kubenswrapper[4858]: I0202 17:49:09.510697 4858 generic.go:334] "Generic (PLEG): container finished" podID="26aff673-3cfc-4ae8-a5df-42fc9ca9ec30" containerID="9e44b5914e06a253f7c555e4c9546d688641f100c7bab047de82d392b8061b15" exitCode=0 Feb 02 17:49:09 crc kubenswrapper[4858]: I0202 17:49:09.510745 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26v56" event={"ID":"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30","Type":"ContainerDied","Data":"9e44b5914e06a253f7c555e4c9546d688641f100c7bab047de82d392b8061b15"} Feb 02 17:49:11 crc kubenswrapper[4858]: I0202 17:49:11.530990 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26v56" event={"ID":"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30","Type":"ContainerStarted","Data":"df00893c7e791d075ee2624a5db3c93cac666b88d817315bf9a07cc1a85cc108"} Feb 02 17:49:11 crc kubenswrapper[4858]: I0202 17:49:11.567140 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-26v56" podStartSLOduration=3.204221085 podStartE2EDuration="6.567115307s" podCreationTimestamp="2026-02-02 17:49:05 +0000 UTC" firstStartedPulling="2026-02-02 17:49:07.494330786 +0000 UTC m=+2048.646746091" lastFinishedPulling="2026-02-02 17:49:10.857225038 +0000 UTC m=+2052.009640313" observedRunningTime="2026-02-02 17:49:11.560074277 +0000 UTC m=+2052.712489572" watchObservedRunningTime="2026-02-02 17:49:11.567115307 +0000 UTC m=+2052.719530582" Feb 02 17:49:15 crc kubenswrapper[4858]: I0202 17:49:15.826918 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-26v56" Feb 02 17:49:15 crc kubenswrapper[4858]: I0202 17:49:15.827814 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-26v56" Feb 02 17:49:15 crc kubenswrapper[4858]: I0202 17:49:15.877874 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-26v56" Feb 02 17:49:16 crc kubenswrapper[4858]: I0202 17:49:16.620115 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-26v56" Feb 02 17:49:16 crc kubenswrapper[4858]: I0202 17:49:16.700783 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-26v56"] Feb 02 17:49:18 crc kubenswrapper[4858]: I0202 17:49:18.589859 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-26v56" podUID="26aff673-3cfc-4ae8-a5df-42fc9ca9ec30" containerName="registry-server" containerID="cri-o://df00893c7e791d075ee2624a5db3c93cac666b88d817315bf9a07cc1a85cc108" gracePeriod=2 Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.049830 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26v56" Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.145195 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26aff673-3cfc-4ae8-a5df-42fc9ca9ec30-utilities\") pod \"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30\" (UID: \"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30\") " Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.145653 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26aff673-3cfc-4ae8-a5df-42fc9ca9ec30-catalog-content\") pod \"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30\" (UID: \"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30\") " Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.145844 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl4l9\" (UniqueName: \"kubernetes.io/projected/26aff673-3cfc-4ae8-a5df-42fc9ca9ec30-kube-api-access-rl4l9\") pod \"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30\" (UID: \"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30\") " Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.147007 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26aff673-3cfc-4ae8-a5df-42fc9ca9ec30-utilities" (OuterVolumeSpecName: "utilities") pod "26aff673-3cfc-4ae8-a5df-42fc9ca9ec30" (UID: "26aff673-3cfc-4ae8-a5df-42fc9ca9ec30"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.153228 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26aff673-3cfc-4ae8-a5df-42fc9ca9ec30-kube-api-access-rl4l9" (OuterVolumeSpecName: "kube-api-access-rl4l9") pod "26aff673-3cfc-4ae8-a5df-42fc9ca9ec30" (UID: "26aff673-3cfc-4ae8-a5df-42fc9ca9ec30"). InnerVolumeSpecName "kube-api-access-rl4l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.171655 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26aff673-3cfc-4ae8-a5df-42fc9ca9ec30-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26aff673-3cfc-4ae8-a5df-42fc9ca9ec30" (UID: "26aff673-3cfc-4ae8-a5df-42fc9ca9ec30"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.248309 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26aff673-3cfc-4ae8-a5df-42fc9ca9ec30-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.248350 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl4l9\" (UniqueName: \"kubernetes.io/projected/26aff673-3cfc-4ae8-a5df-42fc9ca9ec30-kube-api-access-rl4l9\") on node \"crc\" DevicePath \"\"" Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.248363 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26aff673-3cfc-4ae8-a5df-42fc9ca9ec30-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.600084 4858 generic.go:334] "Generic (PLEG): container finished" podID="26aff673-3cfc-4ae8-a5df-42fc9ca9ec30" containerID="df00893c7e791d075ee2624a5db3c93cac666b88d817315bf9a07cc1a85cc108" exitCode=0 Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.600126 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26v56" event={"ID":"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30","Type":"ContainerDied","Data":"df00893c7e791d075ee2624a5db3c93cac666b88d817315bf9a07cc1a85cc108"} Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.600156 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26v56" event={"ID":"26aff673-3cfc-4ae8-a5df-42fc9ca9ec30","Type":"ContainerDied","Data":"0dc5767c9872550971d2cf16807fcf6a252c864d6ebe53b627247f6b602a8a61"} Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.600178 4858 scope.go:117] "RemoveContainer" containerID="df00893c7e791d075ee2624a5db3c93cac666b88d817315bf9a07cc1a85cc108" Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.602103 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26v56" Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.619172 4858 scope.go:117] "RemoveContainer" containerID="9e44b5914e06a253f7c555e4c9546d688641f100c7bab047de82d392b8061b15" Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.638804 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-26v56"] Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.650539 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-26v56"] Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.665109 4858 scope.go:117] "RemoveContainer" containerID="f79b2d6da4fbfcc006970e168dc80bd984d72e6cce5cbca4e5f1ce9dc1a97cf1" Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.691133 4858 scope.go:117] "RemoveContainer" containerID="df00893c7e791d075ee2624a5db3c93cac666b88d817315bf9a07cc1a85cc108" Feb 02 17:49:19 crc kubenswrapper[4858]: E0202 17:49:19.691667 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df00893c7e791d075ee2624a5db3c93cac666b88d817315bf9a07cc1a85cc108\": container with ID starting with df00893c7e791d075ee2624a5db3c93cac666b88d817315bf9a07cc1a85cc108 not found: ID does not exist" containerID="df00893c7e791d075ee2624a5db3c93cac666b88d817315bf9a07cc1a85cc108" Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.691710 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df00893c7e791d075ee2624a5db3c93cac666b88d817315bf9a07cc1a85cc108"} err="failed to get container status \"df00893c7e791d075ee2624a5db3c93cac666b88d817315bf9a07cc1a85cc108\": rpc error: code = NotFound desc = could not find container \"df00893c7e791d075ee2624a5db3c93cac666b88d817315bf9a07cc1a85cc108\": container with ID starting with df00893c7e791d075ee2624a5db3c93cac666b88d817315bf9a07cc1a85cc108 not found: ID does not exist" Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.691739 4858 scope.go:117] "RemoveContainer" containerID="9e44b5914e06a253f7c555e4c9546d688641f100c7bab047de82d392b8061b15" Feb 02 17:49:19 crc kubenswrapper[4858]: E0202 17:49:19.692166 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e44b5914e06a253f7c555e4c9546d688641f100c7bab047de82d392b8061b15\": container with ID starting with 9e44b5914e06a253f7c555e4c9546d688641f100c7bab047de82d392b8061b15 not found: ID does not exist" containerID="9e44b5914e06a253f7c555e4c9546d688641f100c7bab047de82d392b8061b15" Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.692202 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e44b5914e06a253f7c555e4c9546d688641f100c7bab047de82d392b8061b15"} err="failed to get container status \"9e44b5914e06a253f7c555e4c9546d688641f100c7bab047de82d392b8061b15\": rpc error: code = NotFound desc = could not find container \"9e44b5914e06a253f7c555e4c9546d688641f100c7bab047de82d392b8061b15\": container with ID starting with 9e44b5914e06a253f7c555e4c9546d688641f100c7bab047de82d392b8061b15 not found: ID does not exist" Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.692238 4858 scope.go:117] "RemoveContainer" containerID="f79b2d6da4fbfcc006970e168dc80bd984d72e6cce5cbca4e5f1ce9dc1a97cf1" Feb 02 17:49:19 crc kubenswrapper[4858]: E0202 17:49:19.692588 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f79b2d6da4fbfcc006970e168dc80bd984d72e6cce5cbca4e5f1ce9dc1a97cf1\": container with ID starting with f79b2d6da4fbfcc006970e168dc80bd984d72e6cce5cbca4e5f1ce9dc1a97cf1 not found: ID does not exist" containerID="f79b2d6da4fbfcc006970e168dc80bd984d72e6cce5cbca4e5f1ce9dc1a97cf1" Feb 02 17:49:19 crc kubenswrapper[4858]: I0202 17:49:19.692618 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f79b2d6da4fbfcc006970e168dc80bd984d72e6cce5cbca4e5f1ce9dc1a97cf1"} err="failed to get container status \"f79b2d6da4fbfcc006970e168dc80bd984d72e6cce5cbca4e5f1ce9dc1a97cf1\": rpc error: code = NotFound desc = could not find container \"f79b2d6da4fbfcc006970e168dc80bd984d72e6cce5cbca4e5f1ce9dc1a97cf1\": container with ID starting with f79b2d6da4fbfcc006970e168dc80bd984d72e6cce5cbca4e5f1ce9dc1a97cf1 not found: ID does not exist" Feb 02 17:49:20 crc kubenswrapper[4858]: I0202 17:49:20.412300 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26aff673-3cfc-4ae8-a5df-42fc9ca9ec30" path="/var/lib/kubelet/pods/26aff673-3cfc-4ae8-a5df-42fc9ca9ec30/volumes" Feb 02 17:50:27 crc kubenswrapper[4858]: I0202 17:50:27.808071 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:50:27 crc kubenswrapper[4858]: I0202 17:50:27.808769 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:50:48 crc kubenswrapper[4858]: I0202 17:50:48.452614 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab876f9-d750-4647-8212-6f9c4bee6eee" containerID="e50aa59762f3516e41f784c696524053504de6c0ab9517dd6c22de659610436f" exitCode=0 Feb 02 17:50:48 crc kubenswrapper[4858]: I0202 17:50:48.452695 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" event={"ID":"2ab876f9-d750-4647-8212-6f9c4bee6eee","Type":"ContainerDied","Data":"e50aa59762f3516e41f784c696524053504de6c0ab9517dd6c22de659610436f"} Feb 02 17:50:49 crc kubenswrapper[4858]: I0202 17:50:49.934028 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.001081 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-libvirt-combined-ca-bundle\") pod \"2ab876f9-d750-4647-8212-6f9c4bee6eee\" (UID: \"2ab876f9-d750-4647-8212-6f9c4bee6eee\") " Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.001241 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-libvirt-secret-0\") pod \"2ab876f9-d750-4647-8212-6f9c4bee6eee\" (UID: \"2ab876f9-d750-4647-8212-6f9c4bee6eee\") " Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.001351 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t2zt\" (UniqueName: \"kubernetes.io/projected/2ab876f9-d750-4647-8212-6f9c4bee6eee-kube-api-access-5t2zt\") pod \"2ab876f9-d750-4647-8212-6f9c4bee6eee\" (UID: \"2ab876f9-d750-4647-8212-6f9c4bee6eee\") " Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.001450 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-ssh-key-openstack-edpm-ipam\") pod \"2ab876f9-d750-4647-8212-6f9c4bee6eee\" (UID: \"2ab876f9-d750-4647-8212-6f9c4bee6eee\") " Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.001519 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-inventory\") pod \"2ab876f9-d750-4647-8212-6f9c4bee6eee\" (UID: \"2ab876f9-d750-4647-8212-6f9c4bee6eee\") " Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.031676 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2ab876f9-d750-4647-8212-6f9c4bee6eee" (UID: "2ab876f9-d750-4647-8212-6f9c4bee6eee"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.032356 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ab876f9-d750-4647-8212-6f9c4bee6eee-kube-api-access-5t2zt" (OuterVolumeSpecName: "kube-api-access-5t2zt") pod "2ab876f9-d750-4647-8212-6f9c4bee6eee" (UID: "2ab876f9-d750-4647-8212-6f9c4bee6eee"). InnerVolumeSpecName "kube-api-access-5t2zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.057336 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "2ab876f9-d750-4647-8212-6f9c4bee6eee" (UID: "2ab876f9-d750-4647-8212-6f9c4bee6eee"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.066218 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-inventory" (OuterVolumeSpecName: "inventory") pod "2ab876f9-d750-4647-8212-6f9c4bee6eee" (UID: "2ab876f9-d750-4647-8212-6f9c4bee6eee"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.069854 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2ab876f9-d750-4647-8212-6f9c4bee6eee" (UID: "2ab876f9-d750-4647-8212-6f9c4bee6eee"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.103876 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.103911 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.103921 4858 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.103932 4858 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/2ab876f9-d750-4647-8212-6f9c4bee6eee-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.103942 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t2zt\" (UniqueName: \"kubernetes.io/projected/2ab876f9-d750-4647-8212-6f9c4bee6eee-kube-api-access-5t2zt\") on node \"crc\" DevicePath \"\"" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.472671 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" event={"ID":"2ab876f9-d750-4647-8212-6f9c4bee6eee","Type":"ContainerDied","Data":"b20babc308a7cc626956460cfe49a30a5d5822a4144f51e6a53741ea4a274fea"} Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.473235 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b20babc308a7cc626956460cfe49a30a5d5822a4144f51e6a53741ea4a274fea" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.472759 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.573075 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp"] Feb 02 17:50:50 crc kubenswrapper[4858]: E0202 17:50:50.573492 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab876f9-d750-4647-8212-6f9c4bee6eee" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.573515 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab876f9-d750-4647-8212-6f9c4bee6eee" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 17:50:50 crc kubenswrapper[4858]: E0202 17:50:50.573538 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26aff673-3cfc-4ae8-a5df-42fc9ca9ec30" containerName="registry-server" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.573545 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="26aff673-3cfc-4ae8-a5df-42fc9ca9ec30" containerName="registry-server" Feb 02 17:50:50 crc kubenswrapper[4858]: E0202 17:50:50.573559 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26aff673-3cfc-4ae8-a5df-42fc9ca9ec30" containerName="extract-utilities" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.573565 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="26aff673-3cfc-4ae8-a5df-42fc9ca9ec30" containerName="extract-utilities" Feb 02 17:50:50 crc kubenswrapper[4858]: E0202 17:50:50.573576 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26aff673-3cfc-4ae8-a5df-42fc9ca9ec30" containerName="extract-content" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.573581 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="26aff673-3cfc-4ae8-a5df-42fc9ca9ec30" containerName="extract-content" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.573767 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="26aff673-3cfc-4ae8-a5df-42fc9ca9ec30" containerName="registry-server" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.573795 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ab876f9-d750-4647-8212-6f9c4bee6eee" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.574467 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.577391 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.577755 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.578195 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.578280 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.578307 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.578406 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.578450 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q7l94" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.597191 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp"] Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.615243 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.615338 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.615389 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.615420 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.615501 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" 
(UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.615542 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.615578 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.615618 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwrmd\" (UniqueName: \"kubernetes.io/projected/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-kube-api-access-wwrmd\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.615662 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.717606 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.717692 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.717770 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.717810 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.717839 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.717873 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwrmd\" (UniqueName: \"kubernetes.io/projected/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-kube-api-access-wwrmd\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.717906 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.717973 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.718035 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.719736 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.724813 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.724918 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.725058 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.725549 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.726081 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.727430 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.728674 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.740330 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwrmd\" (UniqueName: \"kubernetes.io/projected/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-kube-api-access-wwrmd\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q9xnp\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:50 crc kubenswrapper[4858]: I0202 17:50:50.893225 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:50:51 crc kubenswrapper[4858]: I0202 17:50:51.444865 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp"] Feb 02 17:50:51 crc kubenswrapper[4858]: I0202 17:50:51.486321 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" event={"ID":"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4","Type":"ContainerStarted","Data":"72939e2a98adae9cefbd69818a7562d58758fe0fd7283ffb69ca7b5401926caa"} Feb 02 17:50:52 crc kubenswrapper[4858]: I0202 17:50:52.494653 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" event={"ID":"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4","Type":"ContainerStarted","Data":"43a9093d2d02a46786874717e3ab2f7b03c6df039eaf5ba4ca4ed687618a981d"} Feb 02 17:50:52 crc kubenswrapper[4858]: I0202 17:50:52.521629 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" podStartSLOduration=2.071305899 podStartE2EDuration="2.521610589s" podCreationTimestamp="2026-02-02 17:50:50 +0000 UTC" firstStartedPulling="2026-02-02 17:50:51.456106787 +0000 UTC m=+2152.608522052" lastFinishedPulling="2026-02-02 17:50:51.906411477 +0000 UTC m=+2153.058826742" observedRunningTime="2026-02-02 17:50:52.517226799 +0000 UTC m=+2153.669642064" watchObservedRunningTime="2026-02-02 17:50:52.521610589 +0000 UTC m=+2153.674025844" Feb 02 17:50:57 crc kubenswrapper[4858]: I0202 17:50:57.807895 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:50:57 crc kubenswrapper[4858]: I0202 17:50:57.808528 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:51:27 crc kubenswrapper[4858]: I0202 17:51:27.808328 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:51:27 crc kubenswrapper[4858]: I0202 17:51:27.808837 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:51:27 crc kubenswrapper[4858]: I0202 17:51:27.808882 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" Feb 02 17:51:27 crc kubenswrapper[4858]: I0202 17:51:27.809754 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f7314437f51bcd568115b72c0a6734244eecb76cbfce437d2cfdd1f5575dfab9"} 
pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 17:51:27 crc kubenswrapper[4858]: I0202 17:51:27.809819 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" containerID="cri-o://f7314437f51bcd568115b72c0a6734244eecb76cbfce437d2cfdd1f5575dfab9" gracePeriod=600 Feb 02 17:51:28 crc kubenswrapper[4858]: I0202 17:51:28.863340 4858 generic.go:334] "Generic (PLEG): container finished" podID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerID="f7314437f51bcd568115b72c0a6734244eecb76cbfce437d2cfdd1f5575dfab9" exitCode=0 Feb 02 17:51:28 crc kubenswrapper[4858]: I0202 17:51:28.863409 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerDied","Data":"f7314437f51bcd568115b72c0a6734244eecb76cbfce437d2cfdd1f5575dfab9"} Feb 02 17:51:28 crc kubenswrapper[4858]: I0202 17:51:28.863929 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerStarted","Data":"14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df"} Feb 02 17:51:28 crc kubenswrapper[4858]: I0202 17:51:28.863949 4858 scope.go:117] "RemoveContainer" containerID="74530cf9b9be874cadd1d92f12bf70a28231d482712b7e73c15b0838788189fc" Feb 02 17:52:51 crc kubenswrapper[4858]: I0202 17:52:51.690400 4858 generic.go:334] "Generic (PLEG): container finished" podID="6b5342fc-b2c3-4a83-a74d-a49a34ac15a4" containerID="43a9093d2d02a46786874717e3ab2f7b03c6df039eaf5ba4ca4ed687618a981d" exitCode=0 Feb 02 17:52:51 crc kubenswrapper[4858]: I0202 17:52:51.690494 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" event={"ID":"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4","Type":"ContainerDied","Data":"43a9093d2d02a46786874717e3ab2f7b03c6df039eaf5ba4ca4ed687618a981d"} Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.120027 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.227184 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-migration-ssh-key-0\") pod \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.227248 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-cell1-compute-config-0\") pod \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.227335 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-inventory\") pod \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.227363 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-extra-config-0\") pod \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.227497 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-combined-ca-bundle\") pod \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.227563 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwrmd\" (UniqueName: \"kubernetes.io/projected/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-kube-api-access-wwrmd\") pod \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.227586 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-migration-ssh-key-1\") pod \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.227632 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-cell1-compute-config-1\") pod \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.227672 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-ssh-key-openstack-edpm-ipam\") pod \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\" (UID: \"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4\") " Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.234720 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "6b5342fc-b2c3-4a83-a74d-a49a34ac15a4" (UID: "6b5342fc-b2c3-4a83-a74d-a49a34ac15a4"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.235737 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-kube-api-access-wwrmd" (OuterVolumeSpecName: "kube-api-access-wwrmd") pod "6b5342fc-b2c3-4a83-a74d-a49a34ac15a4" (UID: "6b5342fc-b2c3-4a83-a74d-a49a34ac15a4"). InnerVolumeSpecName "kube-api-access-wwrmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.258570 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "6b5342fc-b2c3-4a83-a74d-a49a34ac15a4" (UID: "6b5342fc-b2c3-4a83-a74d-a49a34ac15a4"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.260673 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "6b5342fc-b2c3-4a83-a74d-a49a34ac15a4" (UID: "6b5342fc-b2c3-4a83-a74d-a49a34ac15a4"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.262257 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "6b5342fc-b2c3-4a83-a74d-a49a34ac15a4" (UID: "6b5342fc-b2c3-4a83-a74d-a49a34ac15a4"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.265321 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6b5342fc-b2c3-4a83-a74d-a49a34ac15a4" (UID: "6b5342fc-b2c3-4a83-a74d-a49a34ac15a4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.271919 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "6b5342fc-b2c3-4a83-a74d-a49a34ac15a4" (UID: "6b5342fc-b2c3-4a83-a74d-a49a34ac15a4"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.274306 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "6b5342fc-b2c3-4a83-a74d-a49a34ac15a4" (UID: "6b5342fc-b2c3-4a83-a74d-a49a34ac15a4"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.285290 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-inventory" (OuterVolumeSpecName: "inventory") pod "6b5342fc-b2c3-4a83-a74d-a49a34ac15a4" (UID: "6b5342fc-b2c3-4a83-a74d-a49a34ac15a4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.330495 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwrmd\" (UniqueName: \"kubernetes.io/projected/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-kube-api-access-wwrmd\") on node \"crc\" DevicePath \"\"" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.330547 4858 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.330559 4858 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.330574 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.330587 4858 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.330599 4858 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.330610 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.330621 4858 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.330631 4858 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b5342fc-b2c3-4a83-a74d-a49a34ac15a4-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.710245 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" event={"ID":"6b5342fc-b2c3-4a83-a74d-a49a34ac15a4","Type":"ContainerDied","Data":"72939e2a98adae9cefbd69818a7562d58758fe0fd7283ffb69ca7b5401926caa"} Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.710291 4858 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="72939e2a98adae9cefbd69818a7562d58758fe0fd7283ffb69ca7b5401926caa" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.710297 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q9xnp" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.816759 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf"] Feb 02 17:52:53 crc kubenswrapper[4858]: E0202 17:52:53.817218 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b5342fc-b2c3-4a83-a74d-a49a34ac15a4" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.817240 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b5342fc-b2c3-4a83-a74d-a49a34ac15a4" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.817511 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b5342fc-b2c3-4a83-a74d-a49a34ac15a4" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.819227 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.821873 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.822011 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q7l94" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.822612 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.822771 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.822783 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.835238 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf"] Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.943303 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-spthf\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.943390 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-spthf\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.943423 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-spthf\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.943610 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-spthf\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.943742 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcs47\" (UniqueName: \"kubernetes.io/projected/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-kube-api-access-mcs47\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-spthf\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.943817 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-spthf\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:53 crc kubenswrapper[4858]: I0202 17:52:53.943852 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-spthf\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:54 crc kubenswrapper[4858]: I0202 17:52:54.046014 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-spthf\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:54 crc kubenswrapper[4858]: I0202 17:52:54.046092 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-spthf\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:54 crc kubenswrapper[4858]: I0202 17:52:54.046119 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-spthf\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:54 crc kubenswrapper[4858]: I0202 17:52:54.046178 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-spthf\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:54 crc kubenswrapper[4858]: I0202 17:52:54.046214 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcs47\" (UniqueName: \"kubernetes.io/projected/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-kube-api-access-mcs47\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-spthf\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:54 crc kubenswrapper[4858]: I0202 17:52:54.046246 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-spthf\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:54 crc kubenswrapper[4858]: I0202 17:52:54.046263 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-spthf\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:54 crc kubenswrapper[4858]: I0202 17:52:54.050395 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-spthf\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:54 crc kubenswrapper[4858]: I0202 17:52:54.050534 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-spthf\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:54 crc kubenswrapper[4858]: I0202 17:52:54.051163 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-spthf\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:54 crc kubenswrapper[4858]: I0202 17:52:54.051654 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-spthf\" (UID: 
\"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:54 crc kubenswrapper[4858]: I0202 17:52:54.051667 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-spthf\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:54 crc kubenswrapper[4858]: I0202 17:52:54.052675 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-spthf\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:54 crc kubenswrapper[4858]: I0202 17:52:54.072402 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcs47\" (UniqueName: \"kubernetes.io/projected/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-kube-api-access-mcs47\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-spthf\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:54 crc kubenswrapper[4858]: I0202 17:52:54.137885 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:52:54 crc kubenswrapper[4858]: I0202 17:52:54.677113 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf"] Feb 02 17:52:54 crc kubenswrapper[4858]: I0202 17:52:54.687790 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 17:52:54 crc kubenswrapper[4858]: I0202 17:52:54.736444 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" event={"ID":"dd969e2b-6db6-4175-8fa3-7dfa60a198ca","Type":"ContainerStarted","Data":"e3dbe4147d375ff1d5627bcd4383d787985b57b5361df3a451529579d3f49744"} Feb 02 17:52:55 crc kubenswrapper[4858]: I0202 17:52:55.747389 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" event={"ID":"dd969e2b-6db6-4175-8fa3-7dfa60a198ca","Type":"ContainerStarted","Data":"ba22b0a111d57f4e88302f300505b24d4cfcc03508d41e4d34742502fed729b6"} Feb 02 17:52:55 crc kubenswrapper[4858]: I0202 17:52:55.770076 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" podStartSLOduration=2.326074146 podStartE2EDuration="2.770051968s" podCreationTimestamp="2026-02-02 17:52:53 +0000 UTC" firstStartedPulling="2026-02-02 17:52:54.687567822 +0000 UTC m=+2275.839983087" lastFinishedPulling="2026-02-02 17:52:55.131545644 +0000 UTC m=+2276.283960909" observedRunningTime="2026-02-02 17:52:55.764121782 +0000 UTC m=+2276.916537067" watchObservedRunningTime="2026-02-02 17:52:55.770051968 +0000 UTC m=+2276.922467253" Feb 02 17:53:00 crc kubenswrapper[4858]: I0202 17:53:00.248049 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-66tww"] Feb 02 17:53:00 crc 
Feb 02 17:53:00 crc kubenswrapper[4858]: I0202 17:53:00.252141 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-66tww"
Feb 02 17:53:00 crc kubenswrapper[4858]: I0202 17:53:00.264111 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-66tww"]
Feb 02 17:53:00 crc kubenswrapper[4858]: I0202 17:53:00.284765 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/037ac85c-0e71-439a-b48d-ed2d1e0b6b37-utilities\") pod \"community-operators-66tww\" (UID: \"037ac85c-0e71-439a-b48d-ed2d1e0b6b37\") " pod="openshift-marketplace/community-operators-66tww"
Feb 02 17:53:00 crc kubenswrapper[4858]: I0202 17:53:00.284822 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/037ac85c-0e71-439a-b48d-ed2d1e0b6b37-catalog-content\") pod \"community-operators-66tww\" (UID: \"037ac85c-0e71-439a-b48d-ed2d1e0b6b37\") " pod="openshift-marketplace/community-operators-66tww"
Feb 02 17:53:00 crc kubenswrapper[4858]: I0202 17:53:00.284878 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwl74\" (UniqueName: \"kubernetes.io/projected/037ac85c-0e71-439a-b48d-ed2d1e0b6b37-kube-api-access-lwl74\") pod \"community-operators-66tww\" (UID: \"037ac85c-0e71-439a-b48d-ed2d1e0b6b37\") " pod="openshift-marketplace/community-operators-66tww"
Feb 02 17:53:00 crc kubenswrapper[4858]: I0202 17:53:00.387525 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/037ac85c-0e71-439a-b48d-ed2d1e0b6b37-utilities\") pod \"community-operators-66tww\" (UID: \"037ac85c-0e71-439a-b48d-ed2d1e0b6b37\") " pod="openshift-marketplace/community-operators-66tww"
Feb 02 17:53:00 crc kubenswrapper[4858]: I0202 17:53:00.387635 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/037ac85c-0e71-439a-b48d-ed2d1e0b6b37-catalog-content\") pod \"community-operators-66tww\" (UID: \"037ac85c-0e71-439a-b48d-ed2d1e0b6b37\") " pod="openshift-marketplace/community-operators-66tww"
Feb 02 17:53:00 crc kubenswrapper[4858]: I0202 17:53:00.387719 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwl74\" (UniqueName: \"kubernetes.io/projected/037ac85c-0e71-439a-b48d-ed2d1e0b6b37-kube-api-access-lwl74\") pod \"community-operators-66tww\" (UID: \"037ac85c-0e71-439a-b48d-ed2d1e0b6b37\") " pod="openshift-marketplace/community-operators-66tww"
Feb 02 17:53:00 crc kubenswrapper[4858]: I0202 17:53:00.389239 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/037ac85c-0e71-439a-b48d-ed2d1e0b6b37-utilities\") pod \"community-operators-66tww\" (UID: \"037ac85c-0e71-439a-b48d-ed2d1e0b6b37\") " pod="openshift-marketplace/community-operators-66tww"
Feb 02 17:53:00 crc kubenswrapper[4858]: I0202 17:53:00.389566 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/037ac85c-0e71-439a-b48d-ed2d1e0b6b37-catalog-content\") pod \"community-operators-66tww\" (UID: \"037ac85c-0e71-439a-b48d-ed2d1e0b6b37\") " pod="openshift-marketplace/community-operators-66tww"
Feb 02 17:53:00 crc kubenswrapper[4858]: I0202 17:53:00.425191 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwl74\" (UniqueName: \"kubernetes.io/projected/037ac85c-0e71-439a-b48d-ed2d1e0b6b37-kube-api-access-lwl74\") pod \"community-operators-66tww\" (UID: \"037ac85c-0e71-439a-b48d-ed2d1e0b6b37\") " pod="openshift-marketplace/community-operators-66tww"
Feb 02 17:53:00 crc kubenswrapper[4858]: I0202 17:53:00.582717 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-66tww"
Feb 02 17:53:01 crc kubenswrapper[4858]: I0202 17:53:01.109125 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-66tww"]
Feb 02 17:53:01 crc kubenswrapper[4858]: I0202 17:53:01.241877 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-84ghb"]
Feb 02 17:53:01 crc kubenswrapper[4858]: I0202 17:53:01.245540 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-84ghb"
Feb 02 17:53:01 crc kubenswrapper[4858]: I0202 17:53:01.252547 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-84ghb"]
Feb 02 17:53:01 crc kubenswrapper[4858]: I0202 17:53:01.315439 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ttbx\" (UniqueName: \"kubernetes.io/projected/78e0af70-0d40-47cb-83f3-23d6b133fb62-kube-api-access-8ttbx\") pod \"redhat-operators-84ghb\" (UID: \"78e0af70-0d40-47cb-83f3-23d6b133fb62\") " pod="openshift-marketplace/redhat-operators-84ghb"
Feb 02 17:53:01 crc kubenswrapper[4858]: I0202 17:53:01.315679 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78e0af70-0d40-47cb-83f3-23d6b133fb62-catalog-content\") pod \"redhat-operators-84ghb\" (UID: \"78e0af70-0d40-47cb-83f3-23d6b133fb62\") " pod="openshift-marketplace/redhat-operators-84ghb"
Feb 02 17:53:01 crc kubenswrapper[4858]: I0202 17:53:01.315712 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78e0af70-0d40-47cb-83f3-23d6b133fb62-utilities\") pod \"redhat-operators-84ghb\" (UID: \"78e0af70-0d40-47cb-83f3-23d6b133fb62\") " pod="openshift-marketplace/redhat-operators-84ghb"
Feb 02 17:53:01 crc kubenswrapper[4858]: I0202 17:53:01.420281 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78e0af70-0d40-47cb-83f3-23d6b133fb62-catalog-content\") pod \"redhat-operators-84ghb\" (UID: \"78e0af70-0d40-47cb-83f3-23d6b133fb62\") " pod="openshift-marketplace/redhat-operators-84ghb"
Feb 02 17:53:01 crc kubenswrapper[4858]: I0202 17:53:01.420336 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78e0af70-0d40-47cb-83f3-23d6b133fb62-utilities\") pod \"redhat-operators-84ghb\" (UID: \"78e0af70-0d40-47cb-83f3-23d6b133fb62\") " pod="openshift-marketplace/redhat-operators-84ghb"
Feb 02 17:53:01 crc kubenswrapper[4858]: I0202 17:53:01.420452 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ttbx\" (UniqueName: \"kubernetes.io/projected/78e0af70-0d40-47cb-83f3-23d6b133fb62-kube-api-access-8ttbx\") pod \"redhat-operators-84ghb\" (UID: \"78e0af70-0d40-47cb-83f3-23d6b133fb62\") " pod="openshift-marketplace/redhat-operators-84ghb"
\"kubernetes.io/projected/78e0af70-0d40-47cb-83f3-23d6b133fb62-kube-api-access-8ttbx\") pod \"redhat-operators-84ghb\" (UID: \"78e0af70-0d40-47cb-83f3-23d6b133fb62\") " pod="openshift-marketplace/redhat-operators-84ghb" Feb 02 17:53:01 crc kubenswrapper[4858]: I0202 17:53:01.420803 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78e0af70-0d40-47cb-83f3-23d6b133fb62-catalog-content\") pod \"redhat-operators-84ghb\" (UID: \"78e0af70-0d40-47cb-83f3-23d6b133fb62\") " pod="openshift-marketplace/redhat-operators-84ghb" Feb 02 17:53:01 crc kubenswrapper[4858]: I0202 17:53:01.421331 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78e0af70-0d40-47cb-83f3-23d6b133fb62-utilities\") pod \"redhat-operators-84ghb\" (UID: \"78e0af70-0d40-47cb-83f3-23d6b133fb62\") " pod="openshift-marketplace/redhat-operators-84ghb" Feb 02 17:53:01 crc kubenswrapper[4858]: I0202 17:53:01.442150 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ttbx\" (UniqueName: \"kubernetes.io/projected/78e0af70-0d40-47cb-83f3-23d6b133fb62-kube-api-access-8ttbx\") pod \"redhat-operators-84ghb\" (UID: \"78e0af70-0d40-47cb-83f3-23d6b133fb62\") " pod="openshift-marketplace/redhat-operators-84ghb" Feb 02 17:53:01 crc kubenswrapper[4858]: I0202 17:53:01.577511 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-84ghb" Feb 02 17:53:01 crc kubenswrapper[4858]: I0202 17:53:01.817687 4858 generic.go:334] "Generic (PLEG): container finished" podID="037ac85c-0e71-439a-b48d-ed2d1e0b6b37" containerID="3f881245498ae9b0b37bd447ded365548b8ede3ed2d3961704b413205d00612f" exitCode=0 Feb 02 17:53:01 crc kubenswrapper[4858]: I0202 17:53:01.817845 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66tww" event={"ID":"037ac85c-0e71-439a-b48d-ed2d1e0b6b37","Type":"ContainerDied","Data":"3f881245498ae9b0b37bd447ded365548b8ede3ed2d3961704b413205d00612f"} Feb 02 17:53:01 crc kubenswrapper[4858]: I0202 17:53:01.818018 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66tww" event={"ID":"037ac85c-0e71-439a-b48d-ed2d1e0b6b37","Type":"ContainerStarted","Data":"17d685d367f1511b4a55e06833841204f318d682ccdf587547b59fdba54b16c1"} Feb 02 17:53:02 crc kubenswrapper[4858]: I0202 17:53:02.058599 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-84ghb"] Feb 02 17:53:02 crc kubenswrapper[4858]: I0202 17:53:02.644968 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xqlsw"] Feb 02 17:53:02 crc kubenswrapper[4858]: I0202 17:53:02.647451 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xqlsw" Feb 02 17:53:02 crc kubenswrapper[4858]: I0202 17:53:02.662834 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xqlsw"] Feb 02 17:53:02 crc kubenswrapper[4858]: I0202 17:53:02.744358 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6cj4\" (UniqueName: \"kubernetes.io/projected/4975dd51-bde5-4e35-a4a7-4f664e2bf729-kube-api-access-q6cj4\") pod \"certified-operators-xqlsw\" (UID: \"4975dd51-bde5-4e35-a4a7-4f664e2bf729\") " pod="openshift-marketplace/certified-operators-xqlsw" Feb 02 17:53:02 crc kubenswrapper[4858]: I0202 17:53:02.744478 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4975dd51-bde5-4e35-a4a7-4f664e2bf729-utilities\") pod \"certified-operators-xqlsw\" (UID: \"4975dd51-bde5-4e35-a4a7-4f664e2bf729\") " pod="openshift-marketplace/certified-operators-xqlsw" Feb 02 17:53:02 crc kubenswrapper[4858]: I0202 17:53:02.744745 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4975dd51-bde5-4e35-a4a7-4f664e2bf729-catalog-content\") pod \"certified-operators-xqlsw\" (UID: \"4975dd51-bde5-4e35-a4a7-4f664e2bf729\") " pod="openshift-marketplace/certified-operators-xqlsw" Feb 02 17:53:02 crc kubenswrapper[4858]: I0202 17:53:02.828883 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66tww" event={"ID":"037ac85c-0e71-439a-b48d-ed2d1e0b6b37","Type":"ContainerStarted","Data":"d0099a17d15e6478e4e0cb9dbd619ce8a0533fb66f16b44f83fd228218c8778a"} Feb 02 17:53:02 crc kubenswrapper[4858]: I0202 17:53:02.830909 4858 generic.go:334] "Generic (PLEG): container finished" podID="78e0af70-0d40-47cb-83f3-23d6b133fb62" containerID="d456fa3f9b94f78d45d0630bd6d6099f7fa3dea5b68620c9fbfab9af420c28a6" exitCode=0 Feb 02 17:53:02 crc kubenswrapper[4858]: I0202 17:53:02.830986 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84ghb" event={"ID":"78e0af70-0d40-47cb-83f3-23d6b133fb62","Type":"ContainerDied","Data":"d456fa3f9b94f78d45d0630bd6d6099f7fa3dea5b68620c9fbfab9af420c28a6"} Feb 02 17:53:02 crc kubenswrapper[4858]: I0202 17:53:02.831060 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84ghb" event={"ID":"78e0af70-0d40-47cb-83f3-23d6b133fb62","Type":"ContainerStarted","Data":"bfbeff1d2724af0469c9ed58cdc9f2df63b5b5d3c2ca2adf5642a8e917a7982d"} Feb 02 17:53:02 crc kubenswrapper[4858]: I0202 17:53:02.846801 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4975dd51-bde5-4e35-a4a7-4f664e2bf729-catalog-content\") pod \"certified-operators-xqlsw\" (UID: \"4975dd51-bde5-4e35-a4a7-4f664e2bf729\") " pod="openshift-marketplace/certified-operators-xqlsw" Feb 02 17:53:02 crc kubenswrapper[4858]: I0202 17:53:02.846849 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6cj4\" (UniqueName: \"kubernetes.io/projected/4975dd51-bde5-4e35-a4a7-4f664e2bf729-kube-api-access-q6cj4\") pod \"certified-operators-xqlsw\" (UID: \"4975dd51-bde5-4e35-a4a7-4f664e2bf729\") " pod="openshift-marketplace/certified-operators-xqlsw" Feb 02 17:53:02 crc 
kubenswrapper[4858]: I0202 17:53:02.846915 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4975dd51-bde5-4e35-a4a7-4f664e2bf729-utilities\") pod \"certified-operators-xqlsw\" (UID: \"4975dd51-bde5-4e35-a4a7-4f664e2bf729\") " pod="openshift-marketplace/certified-operators-xqlsw" Feb 02 17:53:02 crc kubenswrapper[4858]: I0202 17:53:02.847712 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4975dd51-bde5-4e35-a4a7-4f664e2bf729-utilities\") pod \"certified-operators-xqlsw\" (UID: \"4975dd51-bde5-4e35-a4a7-4f664e2bf729\") " pod="openshift-marketplace/certified-operators-xqlsw" Feb 02 17:53:02 crc kubenswrapper[4858]: I0202 17:53:02.847736 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4975dd51-bde5-4e35-a4a7-4f664e2bf729-catalog-content\") pod \"certified-operators-xqlsw\" (UID: \"4975dd51-bde5-4e35-a4a7-4f664e2bf729\") " pod="openshift-marketplace/certified-operators-xqlsw" Feb 02 17:53:02 crc kubenswrapper[4858]: I0202 17:53:02.893436 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6cj4\" (UniqueName: \"kubernetes.io/projected/4975dd51-bde5-4e35-a4a7-4f664e2bf729-kube-api-access-q6cj4\") pod \"certified-operators-xqlsw\" (UID: \"4975dd51-bde5-4e35-a4a7-4f664e2bf729\") " pod="openshift-marketplace/certified-operators-xqlsw" Feb 02 17:53:02 crc kubenswrapper[4858]: I0202 17:53:02.969647 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xqlsw" Feb 02 17:53:03 crc kubenswrapper[4858]: I0202 17:53:03.559240 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xqlsw"] Feb 02 17:53:03 crc kubenswrapper[4858]: I0202 17:53:03.842037 4858 generic.go:334] "Generic (PLEG): container finished" podID="4975dd51-bde5-4e35-a4a7-4f664e2bf729" containerID="fafaec843f27eeeb233cf6004653060e301539ec4ee99fba6570931081d24d43" exitCode=0 Feb 02 17:53:03 crc kubenswrapper[4858]: I0202 17:53:03.842109 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqlsw" event={"ID":"4975dd51-bde5-4e35-a4a7-4f664e2bf729","Type":"ContainerDied","Data":"fafaec843f27eeeb233cf6004653060e301539ec4ee99fba6570931081d24d43"} Feb 02 17:53:03 crc kubenswrapper[4858]: I0202 17:53:03.842520 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqlsw" event={"ID":"4975dd51-bde5-4e35-a4a7-4f664e2bf729","Type":"ContainerStarted","Data":"021536dbff4743c5d24fb6b15e456bfaf414c641c0b600c621801a93cbe16e92"} Feb 02 17:53:03 crc kubenswrapper[4858]: I0202 17:53:03.845374 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84ghb" event={"ID":"78e0af70-0d40-47cb-83f3-23d6b133fb62","Type":"ContainerStarted","Data":"4c3a7d8ee896cc75c66d588d8f2380276f3e52b83e32f09a2c655b67e879ad13"} Feb 02 17:53:03 crc kubenswrapper[4858]: I0202 17:53:03.850023 4858 generic.go:334] "Generic (PLEG): container finished" podID="037ac85c-0e71-439a-b48d-ed2d1e0b6b37" containerID="d0099a17d15e6478e4e0cb9dbd619ce8a0533fb66f16b44f83fd228218c8778a" exitCode=0 Feb 02 17:53:03 crc kubenswrapper[4858]: I0202 17:53:03.850095 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66tww" 
event={"ID":"037ac85c-0e71-439a-b48d-ed2d1e0b6b37","Type":"ContainerDied","Data":"d0099a17d15e6478e4e0cb9dbd619ce8a0533fb66f16b44f83fd228218c8778a"} Feb 02 17:53:04 crc kubenswrapper[4858]: I0202 17:53:04.863123 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66tww" event={"ID":"037ac85c-0e71-439a-b48d-ed2d1e0b6b37","Type":"ContainerStarted","Data":"aea8bae747695e908113ccc41fcc63d7257a58fcf477df803c8ddd2b6b59c1ab"} Feb 02 17:53:04 crc kubenswrapper[4858]: I0202 17:53:04.865859 4858 generic.go:334] "Generic (PLEG): container finished" podID="78e0af70-0d40-47cb-83f3-23d6b133fb62" containerID="4c3a7d8ee896cc75c66d588d8f2380276f3e52b83e32f09a2c655b67e879ad13" exitCode=0 Feb 02 17:53:04 crc kubenswrapper[4858]: I0202 17:53:04.865906 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84ghb" event={"ID":"78e0af70-0d40-47cb-83f3-23d6b133fb62","Type":"ContainerDied","Data":"4c3a7d8ee896cc75c66d588d8f2380276f3e52b83e32f09a2c655b67e879ad13"} Feb 02 17:53:04 crc kubenswrapper[4858]: I0202 17:53:04.889285 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-66tww" podStartSLOduration=2.130543356 podStartE2EDuration="4.889263393s" podCreationTimestamp="2026-02-02 17:53:00 +0000 UTC" firstStartedPulling="2026-02-02 17:53:01.819525529 +0000 UTC m=+2282.971940794" lastFinishedPulling="2026-02-02 17:53:04.578245566 +0000 UTC m=+2285.730660831" observedRunningTime="2026-02-02 17:53:04.885883923 +0000 UTC m=+2286.038299188" watchObservedRunningTime="2026-02-02 17:53:04.889263393 +0000 UTC m=+2286.041678668" Feb 02 17:53:05 crc kubenswrapper[4858]: I0202 17:53:05.877199 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqlsw" event={"ID":"4975dd51-bde5-4e35-a4a7-4f664e2bf729","Type":"ContainerStarted","Data":"303420c3833f32ef3bdbfbb535e12767aface9130d8ec6e3d92b51a05869f321"} Feb 02 17:53:05 crc kubenswrapper[4858]: I0202 17:53:05.880474 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84ghb" event={"ID":"78e0af70-0d40-47cb-83f3-23d6b133fb62","Type":"ContainerStarted","Data":"eb352d10480e750ad565c3a059aafc066206e8c95cde72095ced4705b8cea697"} Feb 02 17:53:07 crc kubenswrapper[4858]: I0202 17:53:07.906994 4858 generic.go:334] "Generic (PLEG): container finished" podID="4975dd51-bde5-4e35-a4a7-4f664e2bf729" containerID="303420c3833f32ef3bdbfbb535e12767aface9130d8ec6e3d92b51a05869f321" exitCode=0 Feb 02 17:53:07 crc kubenswrapper[4858]: I0202 17:53:07.907803 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqlsw" event={"ID":"4975dd51-bde5-4e35-a4a7-4f664e2bf729","Type":"ContainerDied","Data":"303420c3833f32ef3bdbfbb535e12767aface9130d8ec6e3d92b51a05869f321"} Feb 02 17:53:07 crc kubenswrapper[4858]: I0202 17:53:07.936259 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-84ghb" podStartSLOduration=4.42634716 podStartE2EDuration="6.936239223s" podCreationTimestamp="2026-02-02 17:53:01 +0000 UTC" firstStartedPulling="2026-02-02 17:53:02.83475717 +0000 UTC m=+2283.987172435" lastFinishedPulling="2026-02-02 17:53:05.344649233 +0000 UTC m=+2286.497064498" observedRunningTime="2026-02-02 17:53:05.95458963 +0000 UTC m=+2287.107004895" watchObservedRunningTime="2026-02-02 17:53:07.936239223 +0000 UTC m=+2289.088654488" Feb 02 17:53:09 crc 
Feb 02 17:53:09 crc kubenswrapper[4858]: I0202 17:53:09.931996 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqlsw" event={"ID":"4975dd51-bde5-4e35-a4a7-4f664e2bf729","Type":"ContainerStarted","Data":"82d6c8a58c01dbc07f641d49a16ccfe6ac7f50c6f9dcb9d5e39c1d511d147506"}
Feb 02 17:53:09 crc kubenswrapper[4858]: I0202 17:53:09.963896 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xqlsw" podStartSLOduration=3.093315008 podStartE2EDuration="7.96384971s" podCreationTimestamp="2026-02-02 17:53:02 +0000 UTC" firstStartedPulling="2026-02-02 17:53:03.844399214 +0000 UTC m=+2284.996814479" lastFinishedPulling="2026-02-02 17:53:08.714933916 +0000 UTC m=+2289.867349181" observedRunningTime="2026-02-02 17:53:09.953598956 +0000 UTC m=+2291.106014221" watchObservedRunningTime="2026-02-02 17:53:09.96384971 +0000 UTC m=+2291.116264975"
Feb 02 17:53:10 crc kubenswrapper[4858]: I0202 17:53:10.583396 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-66tww"
Feb 02 17:53:10 crc kubenswrapper[4858]: I0202 17:53:10.583786 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-66tww"
Feb 02 17:53:10 crc kubenswrapper[4858]: I0202 17:53:10.633424 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-66tww"
Feb 02 17:53:10 crc kubenswrapper[4858]: I0202 17:53:10.990984 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-66tww"
Feb 02 17:53:11 crc kubenswrapper[4858]: I0202 17:53:11.578547 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-84ghb"
Feb 02 17:53:11 crc kubenswrapper[4858]: I0202 17:53:11.578677 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-84ghb"
Feb 02 17:53:11 crc kubenswrapper[4858]: I0202 17:53:11.632559 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-84ghb"
Feb 02 17:53:12 crc kubenswrapper[4858]: I0202 17:53:12.004010 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-84ghb"
Feb 02 17:53:12 crc kubenswrapper[4858]: I0202 17:53:12.630926 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-66tww"]
Feb 02 17:53:12 crc kubenswrapper[4858]: I0202 17:53:12.961322 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-66tww" podUID="037ac85c-0e71-439a-b48d-ed2d1e0b6b37" containerName="registry-server" containerID="cri-o://aea8bae747695e908113ccc41fcc63d7257a58fcf477df803c8ddd2b6b59c1ab" gracePeriod=2
Feb 02 17:53:12 crc kubenswrapper[4858]: I0202 17:53:12.970721 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xqlsw"
Feb 02 17:53:12 crc kubenswrapper[4858]: I0202 17:53:12.970781 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xqlsw"
Feb 02 17:53:13 crc kubenswrapper[4858]: I0202 17:53:13.029035 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xqlsw"
Feb 02 17:53:13 crc kubenswrapper[4858]: I0202 17:53:13.519088 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-66tww"
Feb 02 17:53:13 crc kubenswrapper[4858]: I0202 17:53:13.599914 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwl74\" (UniqueName: \"kubernetes.io/projected/037ac85c-0e71-439a-b48d-ed2d1e0b6b37-kube-api-access-lwl74\") pod \"037ac85c-0e71-439a-b48d-ed2d1e0b6b37\" (UID: \"037ac85c-0e71-439a-b48d-ed2d1e0b6b37\") "
Feb 02 17:53:13 crc kubenswrapper[4858]: I0202 17:53:13.600060 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/037ac85c-0e71-439a-b48d-ed2d1e0b6b37-catalog-content\") pod \"037ac85c-0e71-439a-b48d-ed2d1e0b6b37\" (UID: \"037ac85c-0e71-439a-b48d-ed2d1e0b6b37\") "
Feb 02 17:53:13 crc kubenswrapper[4858]: I0202 17:53:13.600431 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/037ac85c-0e71-439a-b48d-ed2d1e0b6b37-utilities\") pod \"037ac85c-0e71-439a-b48d-ed2d1e0b6b37\" (UID: \"037ac85c-0e71-439a-b48d-ed2d1e0b6b37\") "
Feb 02 17:53:13 crc kubenswrapper[4858]: I0202 17:53:13.601718 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/037ac85c-0e71-439a-b48d-ed2d1e0b6b37-utilities" (OuterVolumeSpecName: "utilities") pod "037ac85c-0e71-439a-b48d-ed2d1e0b6b37" (UID: "037ac85c-0e71-439a-b48d-ed2d1e0b6b37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 17:53:13 crc kubenswrapper[4858]: I0202 17:53:13.607392 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/037ac85c-0e71-439a-b48d-ed2d1e0b6b37-kube-api-access-lwl74" (OuterVolumeSpecName: "kube-api-access-lwl74") pod "037ac85c-0e71-439a-b48d-ed2d1e0b6b37" (UID: "037ac85c-0e71-439a-b48d-ed2d1e0b6b37"). InnerVolumeSpecName "kube-api-access-lwl74". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:53:13 crc kubenswrapper[4858]: I0202 17:53:13.660617 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/037ac85c-0e71-439a-b48d-ed2d1e0b6b37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "037ac85c-0e71-439a-b48d-ed2d1e0b6b37" (UID: "037ac85c-0e71-439a-b48d-ed2d1e0b6b37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 17:53:13 crc kubenswrapper[4858]: I0202 17:53:13.702934 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwl74\" (UniqueName: \"kubernetes.io/projected/037ac85c-0e71-439a-b48d-ed2d1e0b6b37-kube-api-access-lwl74\") on node \"crc\" DevicePath \"\""
Feb 02 17:53:13 crc kubenswrapper[4858]: I0202 17:53:13.702999 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/037ac85c-0e71-439a-b48d-ed2d1e0b6b37-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 17:53:13 crc kubenswrapper[4858]: I0202 17:53:13.703012 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/037ac85c-0e71-439a-b48d-ed2d1e0b6b37-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 17:53:13 crc kubenswrapper[4858]: I0202 17:53:13.973477 4858 generic.go:334] "Generic (PLEG): container finished" podID="037ac85c-0e71-439a-b48d-ed2d1e0b6b37" containerID="aea8bae747695e908113ccc41fcc63d7257a58fcf477df803c8ddd2b6b59c1ab" exitCode=0
Feb 02 17:53:13 crc kubenswrapper[4858]: I0202 17:53:13.973536 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66tww" event={"ID":"037ac85c-0e71-439a-b48d-ed2d1e0b6b37","Type":"ContainerDied","Data":"aea8bae747695e908113ccc41fcc63d7257a58fcf477df803c8ddd2b6b59c1ab"}
Feb 02 17:53:13 crc kubenswrapper[4858]: I0202 17:53:13.973585 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-66tww"
Feb 02 17:53:13 crc kubenswrapper[4858]: I0202 17:53:13.973614 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66tww" event={"ID":"037ac85c-0e71-439a-b48d-ed2d1e0b6b37","Type":"ContainerDied","Data":"17d685d367f1511b4a55e06833841204f318d682ccdf587547b59fdba54b16c1"}
Feb 02 17:53:13 crc kubenswrapper[4858]: I0202 17:53:13.973637 4858 scope.go:117] "RemoveContainer" containerID="aea8bae747695e908113ccc41fcc63d7257a58fcf477df803c8ddd2b6b59c1ab"
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.017402 4858 scope.go:117] "RemoveContainer" containerID="d0099a17d15e6478e4e0cb9dbd619ce8a0533fb66f16b44f83fd228218c8778a"
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.027937 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-66tww"]
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.039842 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xqlsw"
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.043044 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-66tww"]
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.048552 4858 scope.go:117] "RemoveContainer" containerID="3f881245498ae9b0b37bd447ded365548b8ede3ed2d3961704b413205d00612f"
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.053463 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-84ghb"]
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.053761 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-84ghb" podUID="78e0af70-0d40-47cb-83f3-23d6b133fb62" containerName="registry-server" containerID="cri-o://eb352d10480e750ad565c3a059aafc066206e8c95cde72095ced4705b8cea697" gracePeriod=2
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.106239 4858 scope.go:117] "RemoveContainer" containerID="aea8bae747695e908113ccc41fcc63d7257a58fcf477df803c8ddd2b6b59c1ab"
Feb 02 17:53:14 crc kubenswrapper[4858]: E0202 17:53:14.107945 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aea8bae747695e908113ccc41fcc63d7257a58fcf477df803c8ddd2b6b59c1ab\": container with ID starting with aea8bae747695e908113ccc41fcc63d7257a58fcf477df803c8ddd2b6b59c1ab not found: ID does not exist" containerID="aea8bae747695e908113ccc41fcc63d7257a58fcf477df803c8ddd2b6b59c1ab"
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.108021 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aea8bae747695e908113ccc41fcc63d7257a58fcf477df803c8ddd2b6b59c1ab"} err="failed to get container status \"aea8bae747695e908113ccc41fcc63d7257a58fcf477df803c8ddd2b6b59c1ab\": rpc error: code = NotFound desc = could not find container \"aea8bae747695e908113ccc41fcc63d7257a58fcf477df803c8ddd2b6b59c1ab\": container with ID starting with aea8bae747695e908113ccc41fcc63d7257a58fcf477df803c8ddd2b6b59c1ab not found: ID does not exist"
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.108053 4858 scope.go:117] "RemoveContainer" containerID="d0099a17d15e6478e4e0cb9dbd619ce8a0533fb66f16b44f83fd228218c8778a"
Feb 02 17:53:14 crc kubenswrapper[4858]: E0202 17:53:14.108678 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0099a17d15e6478e4e0cb9dbd619ce8a0533fb66f16b44f83fd228218c8778a\": container with ID starting with d0099a17d15e6478e4e0cb9dbd619ce8a0533fb66f16b44f83fd228218c8778a not found: ID does not exist" containerID="d0099a17d15e6478e4e0cb9dbd619ce8a0533fb66f16b44f83fd228218c8778a"
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.108717 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0099a17d15e6478e4e0cb9dbd619ce8a0533fb66f16b44f83fd228218c8778a"} err="failed to get container status \"d0099a17d15e6478e4e0cb9dbd619ce8a0533fb66f16b44f83fd228218c8778a\": rpc error: code = NotFound desc = could not find container \"d0099a17d15e6478e4e0cb9dbd619ce8a0533fb66f16b44f83fd228218c8778a\": container with ID starting with d0099a17d15e6478e4e0cb9dbd619ce8a0533fb66f16b44f83fd228218c8778a not found: ID does not exist"
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.108740 4858 scope.go:117] "RemoveContainer" containerID="3f881245498ae9b0b37bd447ded365548b8ede3ed2d3961704b413205d00612f"
Feb 02 17:53:14 crc kubenswrapper[4858]: E0202 17:53:14.109349 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f881245498ae9b0b37bd447ded365548b8ede3ed2d3961704b413205d00612f\": container with ID starting with 3f881245498ae9b0b37bd447ded365548b8ede3ed2d3961704b413205d00612f not found: ID does not exist" containerID="3f881245498ae9b0b37bd447ded365548b8ede3ed2d3961704b413205d00612f"
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.109381 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f881245498ae9b0b37bd447ded365548b8ede3ed2d3961704b413205d00612f"} err="failed to get container status \"3f881245498ae9b0b37bd447ded365548b8ede3ed2d3961704b413205d00612f\": rpc error: code = NotFound desc = could not find container \"3f881245498ae9b0b37bd447ded365548b8ede3ed2d3961704b413205d00612f\": container with ID starting with 3f881245498ae9b0b37bd447ded365548b8ede3ed2d3961704b413205d00612f not found: ID does not exist"
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.414330 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="037ac85c-0e71-439a-b48d-ed2d1e0b6b37" path="/var/lib/kubelet/pods/037ac85c-0e71-439a-b48d-ed2d1e0b6b37/volumes"
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.591401 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-84ghb"
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.628255 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ttbx\" (UniqueName: \"kubernetes.io/projected/78e0af70-0d40-47cb-83f3-23d6b133fb62-kube-api-access-8ttbx\") pod \"78e0af70-0d40-47cb-83f3-23d6b133fb62\" (UID: \"78e0af70-0d40-47cb-83f3-23d6b133fb62\") "
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.628315 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78e0af70-0d40-47cb-83f3-23d6b133fb62-utilities\") pod \"78e0af70-0d40-47cb-83f3-23d6b133fb62\" (UID: \"78e0af70-0d40-47cb-83f3-23d6b133fb62\") "
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.629275 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78e0af70-0d40-47cb-83f3-23d6b133fb62-catalog-content\") pod \"78e0af70-0d40-47cb-83f3-23d6b133fb62\" (UID: \"78e0af70-0d40-47cb-83f3-23d6b133fb62\") "
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.630883 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78e0af70-0d40-47cb-83f3-23d6b133fb62-utilities" (OuterVolumeSpecName: "utilities") pod "78e0af70-0d40-47cb-83f3-23d6b133fb62" (UID: "78e0af70-0d40-47cb-83f3-23d6b133fb62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.640926 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78e0af70-0d40-47cb-83f3-23d6b133fb62-kube-api-access-8ttbx" (OuterVolumeSpecName: "kube-api-access-8ttbx") pod "78e0af70-0d40-47cb-83f3-23d6b133fb62" (UID: "78e0af70-0d40-47cb-83f3-23d6b133fb62"). InnerVolumeSpecName "kube-api-access-8ttbx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.732570 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ttbx\" (UniqueName: \"kubernetes.io/projected/78e0af70-0d40-47cb-83f3-23d6b133fb62-kube-api-access-8ttbx\") on node \"crc\" DevicePath \"\""
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.732619 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78e0af70-0d40-47cb-83f3-23d6b133fb62-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.792249 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78e0af70-0d40-47cb-83f3-23d6b133fb62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78e0af70-0d40-47cb-83f3-23d6b133fb62" (UID: "78e0af70-0d40-47cb-83f3-23d6b133fb62"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 17:53:14 crc kubenswrapper[4858]: I0202 17:53:14.835723 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78e0af70-0d40-47cb-83f3-23d6b133fb62-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 17:53:15 crc kubenswrapper[4858]: I0202 17:53:15.002509 4858 generic.go:334] "Generic (PLEG): container finished" podID="78e0af70-0d40-47cb-83f3-23d6b133fb62" containerID="eb352d10480e750ad565c3a059aafc066206e8c95cde72095ced4705b8cea697" exitCode=0
Feb 02 17:53:15 crc kubenswrapper[4858]: I0202 17:53:15.002692 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-84ghb"
Feb 02 17:53:15 crc kubenswrapper[4858]: I0202 17:53:15.002784 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84ghb" event={"ID":"78e0af70-0d40-47cb-83f3-23d6b133fb62","Type":"ContainerDied","Data":"eb352d10480e750ad565c3a059aafc066206e8c95cde72095ced4705b8cea697"}
Feb 02 17:53:15 crc kubenswrapper[4858]: I0202 17:53:15.002838 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84ghb" event={"ID":"78e0af70-0d40-47cb-83f3-23d6b133fb62","Type":"ContainerDied","Data":"bfbeff1d2724af0469c9ed58cdc9f2df63b5b5d3c2ca2adf5642a8e917a7982d"}
Feb 02 17:53:15 crc kubenswrapper[4858]: I0202 17:53:15.002867 4858 scope.go:117] "RemoveContainer" containerID="eb352d10480e750ad565c3a059aafc066206e8c95cde72095ced4705b8cea697"
Feb 02 17:53:15 crc kubenswrapper[4858]: I0202 17:53:15.029525 4858 scope.go:117] "RemoveContainer" containerID="4c3a7d8ee896cc75c66d588d8f2380276f3e52b83e32f09a2c655b67e879ad13"
Feb 02 17:53:15 crc kubenswrapper[4858]: I0202 17:53:15.055727 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-84ghb"]
Feb 02 17:53:15 crc kubenswrapper[4858]: I0202 17:53:15.060832 4858 scope.go:117] "RemoveContainer" containerID="d456fa3f9b94f78d45d0630bd6d6099f7fa3dea5b68620c9fbfab9af420c28a6"
Feb 02 17:53:15 crc kubenswrapper[4858]: I0202 17:53:15.070584 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-84ghb"]
Feb 02 17:53:15 crc kubenswrapper[4858]: I0202 17:53:15.093096 4858 scope.go:117] "RemoveContainer" containerID="eb352d10480e750ad565c3a059aafc066206e8c95cde72095ced4705b8cea697"
Feb 02 17:53:15 crc kubenswrapper[4858]: E0202 17:53:15.094086 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb352d10480e750ad565c3a059aafc066206e8c95cde72095ced4705b8cea697\": container with ID starting with eb352d10480e750ad565c3a059aafc066206e8c95cde72095ced4705b8cea697 not found: ID does not exist" containerID="eb352d10480e750ad565c3a059aafc066206e8c95cde72095ced4705b8cea697"
Feb 02 17:53:15 crc kubenswrapper[4858]: I0202 17:53:15.094170 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb352d10480e750ad565c3a059aafc066206e8c95cde72095ced4705b8cea697"} err="failed to get container status \"eb352d10480e750ad565c3a059aafc066206e8c95cde72095ced4705b8cea697\": rpc error: code = NotFound desc = could not find container \"eb352d10480e750ad565c3a059aafc066206e8c95cde72095ced4705b8cea697\": container with ID starting with eb352d10480e750ad565c3a059aafc066206e8c95cde72095ced4705b8cea697 not found: ID does not exist"
Feb 02 17:53:15 crc kubenswrapper[4858]: I0202 17:53:15.094213 4858 scope.go:117] "RemoveContainer" containerID="4c3a7d8ee896cc75c66d588d8f2380276f3e52b83e32f09a2c655b67e879ad13"
Feb 02 17:53:15 crc kubenswrapper[4858]: E0202 17:53:15.094852 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c3a7d8ee896cc75c66d588d8f2380276f3e52b83e32f09a2c655b67e879ad13\": container with ID starting with 4c3a7d8ee896cc75c66d588d8f2380276f3e52b83e32f09a2c655b67e879ad13 not found: ID does not exist" containerID="4c3a7d8ee896cc75c66d588d8f2380276f3e52b83e32f09a2c655b67e879ad13"
Feb 02 17:53:15 crc kubenswrapper[4858]: I0202 17:53:15.094912 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c3a7d8ee896cc75c66d588d8f2380276f3e52b83e32f09a2c655b67e879ad13"} err="failed to get container status \"4c3a7d8ee896cc75c66d588d8f2380276f3e52b83e32f09a2c655b67e879ad13\": rpc error: code = NotFound desc = could not find container \"4c3a7d8ee896cc75c66d588d8f2380276f3e52b83e32f09a2c655b67e879ad13\": container with ID starting with 4c3a7d8ee896cc75c66d588d8f2380276f3e52b83e32f09a2c655b67e879ad13 not found: ID does not exist"
Feb 02 17:53:15 crc kubenswrapper[4858]: I0202 17:53:15.094956 4858 scope.go:117] "RemoveContainer" containerID="d456fa3f9b94f78d45d0630bd6d6099f7fa3dea5b68620c9fbfab9af420c28a6"
Feb 02 17:53:15 crc kubenswrapper[4858]: E0202 17:53:15.095532 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d456fa3f9b94f78d45d0630bd6d6099f7fa3dea5b68620c9fbfab9af420c28a6\": container with ID starting with d456fa3f9b94f78d45d0630bd6d6099f7fa3dea5b68620c9fbfab9af420c28a6 not found: ID does not exist" containerID="d456fa3f9b94f78d45d0630bd6d6099f7fa3dea5b68620c9fbfab9af420c28a6"
Feb 02 17:53:15 crc kubenswrapper[4858]: I0202 17:53:15.095579 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d456fa3f9b94f78d45d0630bd6d6099f7fa3dea5b68620c9fbfab9af420c28a6"} err="failed to get container status \"d456fa3f9b94f78d45d0630bd6d6099f7fa3dea5b68620c9fbfab9af420c28a6\": rpc error: code = NotFound desc = could not find container \"d456fa3f9b94f78d45d0630bd6d6099f7fa3dea5b68620c9fbfab9af420c28a6\": container with ID starting with d456fa3f9b94f78d45d0630bd6d6099f7fa3dea5b68620c9fbfab9af420c28a6 not found: ID does not exist"
Feb 02 17:53:16 crc kubenswrapper[4858]: I0202 17:53:16.416349 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod
volumes dir" podUID="78e0af70-0d40-47cb-83f3-23d6b133fb62" path="/var/lib/kubelet/pods/78e0af70-0d40-47cb-83f3-23d6b133fb62/volumes" Feb 02 17:53:16 crc kubenswrapper[4858]: I0202 17:53:16.435887 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xqlsw"] Feb 02 17:53:16 crc kubenswrapper[4858]: I0202 17:53:16.436320 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xqlsw" podUID="4975dd51-bde5-4e35-a4a7-4f664e2bf729" containerName="registry-server" containerID="cri-o://82d6c8a58c01dbc07f641d49a16ccfe6ac7f50c6f9dcb9d5e39c1d511d147506" gracePeriod=2 Feb 02 17:53:16 crc kubenswrapper[4858]: I0202 17:53:16.954537 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xqlsw" Feb 02 17:53:16 crc kubenswrapper[4858]: I0202 17:53:16.978324 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4975dd51-bde5-4e35-a4a7-4f664e2bf729-catalog-content\") pod \"4975dd51-bde5-4e35-a4a7-4f664e2bf729\" (UID: \"4975dd51-bde5-4e35-a4a7-4f664e2bf729\") " Feb 02 17:53:16 crc kubenswrapper[4858]: I0202 17:53:16.978564 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6cj4\" (UniqueName: \"kubernetes.io/projected/4975dd51-bde5-4e35-a4a7-4f664e2bf729-kube-api-access-q6cj4\") pod \"4975dd51-bde5-4e35-a4a7-4f664e2bf729\" (UID: \"4975dd51-bde5-4e35-a4a7-4f664e2bf729\") " Feb 02 17:53:16 crc kubenswrapper[4858]: I0202 17:53:16.978741 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4975dd51-bde5-4e35-a4a7-4f664e2bf729-utilities\") pod \"4975dd51-bde5-4e35-a4a7-4f664e2bf729\" (UID: \"4975dd51-bde5-4e35-a4a7-4f664e2bf729\") " Feb 02 17:53:16 crc kubenswrapper[4858]: I0202 17:53:16.981578 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4975dd51-bde5-4e35-a4a7-4f664e2bf729-utilities" (OuterVolumeSpecName: "utilities") pod "4975dd51-bde5-4e35-a4a7-4f664e2bf729" (UID: "4975dd51-bde5-4e35-a4a7-4f664e2bf729"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:53:17 crc kubenswrapper[4858]: I0202 17:53:17.040384 4858 generic.go:334] "Generic (PLEG): container finished" podID="4975dd51-bde5-4e35-a4a7-4f664e2bf729" containerID="82d6c8a58c01dbc07f641d49a16ccfe6ac7f50c6f9dcb9d5e39c1d511d147506" exitCode=0 Feb 02 17:53:17 crc kubenswrapper[4858]: I0202 17:53:17.041032 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqlsw" event={"ID":"4975dd51-bde5-4e35-a4a7-4f664e2bf729","Type":"ContainerDied","Data":"82d6c8a58c01dbc07f641d49a16ccfe6ac7f50c6f9dcb9d5e39c1d511d147506"} Feb 02 17:53:17 crc kubenswrapper[4858]: I0202 17:53:17.041098 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xqlsw" event={"ID":"4975dd51-bde5-4e35-a4a7-4f664e2bf729","Type":"ContainerDied","Data":"021536dbff4743c5d24fb6b15e456bfaf414c641c0b600c621801a93cbe16e92"} Feb 02 17:53:17 crc kubenswrapper[4858]: I0202 17:53:17.041153 4858 scope.go:117] "RemoveContainer" containerID="82d6c8a58c01dbc07f641d49a16ccfe6ac7f50c6f9dcb9d5e39c1d511d147506" Feb 02 17:53:17 crc kubenswrapper[4858]: I0202 17:53:17.042071 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xqlsw" Feb 02 17:53:17 crc kubenswrapper[4858]: I0202 17:53:17.083017 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4975dd51-bde5-4e35-a4a7-4f664e2bf729-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 17:53:17 crc kubenswrapper[4858]: I0202 17:53:17.128798 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4975dd51-bde5-4e35-a4a7-4f664e2bf729-kube-api-access-q6cj4" (OuterVolumeSpecName: "kube-api-access-q6cj4") pod "4975dd51-bde5-4e35-a4a7-4f664e2bf729" (UID: "4975dd51-bde5-4e35-a4a7-4f664e2bf729"). InnerVolumeSpecName "kube-api-access-q6cj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:53:17 crc kubenswrapper[4858]: I0202 17:53:17.184312 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6cj4\" (UniqueName: \"kubernetes.io/projected/4975dd51-bde5-4e35-a4a7-4f664e2bf729-kube-api-access-q6cj4\") on node \"crc\" DevicePath \"\"" Feb 02 17:53:17 crc kubenswrapper[4858]: I0202 17:53:17.199905 4858 scope.go:117] "RemoveContainer" containerID="303420c3833f32ef3bdbfbb535e12767aface9130d8ec6e3d92b51a05869f321" Feb 02 17:53:17 crc kubenswrapper[4858]: I0202 17:53:17.230935 4858 scope.go:117] "RemoveContainer" containerID="fafaec843f27eeeb233cf6004653060e301539ec4ee99fba6570931081d24d43" Feb 02 17:53:17 crc kubenswrapper[4858]: I0202 17:53:17.274498 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4975dd51-bde5-4e35-a4a7-4f664e2bf729-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4975dd51-bde5-4e35-a4a7-4f664e2bf729" (UID: "4975dd51-bde5-4e35-a4a7-4f664e2bf729"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:53:17 crc kubenswrapper[4858]: I0202 17:53:17.280905 4858 scope.go:117] "RemoveContainer" containerID="82d6c8a58c01dbc07f641d49a16ccfe6ac7f50c6f9dcb9d5e39c1d511d147506" Feb 02 17:53:17 crc kubenswrapper[4858]: E0202 17:53:17.282880 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82d6c8a58c01dbc07f641d49a16ccfe6ac7f50c6f9dcb9d5e39c1d511d147506\": container with ID starting with 82d6c8a58c01dbc07f641d49a16ccfe6ac7f50c6f9dcb9d5e39c1d511d147506 not found: ID does not exist" containerID="82d6c8a58c01dbc07f641d49a16ccfe6ac7f50c6f9dcb9d5e39c1d511d147506" Feb 02 17:53:17 crc kubenswrapper[4858]: I0202 17:53:17.282930 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82d6c8a58c01dbc07f641d49a16ccfe6ac7f50c6f9dcb9d5e39c1d511d147506"} err="failed to get container status \"82d6c8a58c01dbc07f641d49a16ccfe6ac7f50c6f9dcb9d5e39c1d511d147506\": rpc error: code = NotFound desc = could not find container \"82d6c8a58c01dbc07f641d49a16ccfe6ac7f50c6f9dcb9d5e39c1d511d147506\": container with ID starting with 82d6c8a58c01dbc07f641d49a16ccfe6ac7f50c6f9dcb9d5e39c1d511d147506 not found: ID does not exist" Feb 02 17:53:17 crc kubenswrapper[4858]: I0202 17:53:17.282954 4858 scope.go:117] "RemoveContainer" containerID="303420c3833f32ef3bdbfbb535e12767aface9130d8ec6e3d92b51a05869f321" Feb 02 17:53:17 crc kubenswrapper[4858]: E0202 17:53:17.284860 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"303420c3833f32ef3bdbfbb535e12767aface9130d8ec6e3d92b51a05869f321\": container with ID starting with 303420c3833f32ef3bdbfbb535e12767aface9130d8ec6e3d92b51a05869f321 not found: ID does not exist" containerID="303420c3833f32ef3bdbfbb535e12767aface9130d8ec6e3d92b51a05869f321" Feb 02 17:53:17 crc kubenswrapper[4858]: I0202 17:53:17.286563 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"303420c3833f32ef3bdbfbb535e12767aface9130d8ec6e3d92b51a05869f321"} err="failed to get container status \"303420c3833f32ef3bdbfbb535e12767aface9130d8ec6e3d92b51a05869f321\": rpc error: code = NotFound desc = could not find container \"303420c3833f32ef3bdbfbb535e12767aface9130d8ec6e3d92b51a05869f321\": container with ID starting with 303420c3833f32ef3bdbfbb535e12767aface9130d8ec6e3d92b51a05869f321 not found: ID does not exist" Feb 02 17:53:17 crc kubenswrapper[4858]: I0202 17:53:17.286612 4858 scope.go:117] "RemoveContainer" containerID="fafaec843f27eeeb233cf6004653060e301539ec4ee99fba6570931081d24d43" Feb 02 17:53:17 crc kubenswrapper[4858]: I0202 17:53:17.285960 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4975dd51-bde5-4e35-a4a7-4f664e2bf729-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 17:53:17 crc kubenswrapper[4858]: E0202 17:53:17.287224 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fafaec843f27eeeb233cf6004653060e301539ec4ee99fba6570931081d24d43\": container with ID starting with fafaec843f27eeeb233cf6004653060e301539ec4ee99fba6570931081d24d43 not found: ID does not exist" containerID="fafaec843f27eeeb233cf6004653060e301539ec4ee99fba6570931081d24d43" Feb 02 17:53:17 crc kubenswrapper[4858]: I0202 17:53:17.287264 4858 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fafaec843f27eeeb233cf6004653060e301539ec4ee99fba6570931081d24d43"} err="failed to get container status \"fafaec843f27eeeb233cf6004653060e301539ec4ee99fba6570931081d24d43\": rpc error: code = NotFound desc = could not find container \"fafaec843f27eeeb233cf6004653060e301539ec4ee99fba6570931081d24d43\": container with ID starting with fafaec843f27eeeb233cf6004653060e301539ec4ee99fba6570931081d24d43 not found: ID does not exist" Feb 02 17:53:17 crc kubenswrapper[4858]: I0202 17:53:17.382672 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xqlsw"] Feb 02 17:53:17 crc kubenswrapper[4858]: I0202 17:53:17.392146 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xqlsw"] Feb 02 17:53:18 crc kubenswrapper[4858]: I0202 17:53:18.415784 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4975dd51-bde5-4e35-a4a7-4f664e2bf729" path="/var/lib/kubelet/pods/4975dd51-bde5-4e35-a4a7-4f664e2bf729/volumes" Feb 02 17:53:57 crc kubenswrapper[4858]: I0202 17:53:57.808036 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:53:57 crc kubenswrapper[4858]: I0202 17:53:57.808720 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:54:27 crc kubenswrapper[4858]: I0202 17:54:27.808136 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:54:27 crc kubenswrapper[4858]: I0202 17:54:27.809178 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:54:57 crc kubenswrapper[4858]: I0202 17:54:57.808232 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 17:54:57 crc kubenswrapper[4858]: I0202 17:54:57.808900 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 17:54:57 crc kubenswrapper[4858]: I0202 17:54:57.808951 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" Feb 02 
17:54:57 crc kubenswrapper[4858]: I0202 17:54:57.809744 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df"} pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 17:54:57 crc kubenswrapper[4858]: I0202 17:54:57.809804 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" containerID="cri-o://14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" gracePeriod=600 Feb 02 17:54:57 crc kubenswrapper[4858]: E0202 17:54:57.933127 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:54:58 crc kubenswrapper[4858]: I0202 17:54:58.094229 4858 generic.go:334] "Generic (PLEG): container finished" podID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" exitCode=0 Feb 02 17:54:58 crc kubenswrapper[4858]: I0202 17:54:58.094288 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerDied","Data":"14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df"} Feb 02 17:54:58 crc kubenswrapper[4858]: I0202 17:54:58.094332 4858 scope.go:117] "RemoveContainer" containerID="f7314437f51bcd568115b72c0a6734244eecb76cbfce437d2cfdd1f5575dfab9" Feb 02 17:54:58 crc kubenswrapper[4858]: I0202 17:54:58.095079 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:54:58 crc kubenswrapper[4858]: E0202 17:54:58.095352 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:55:12 crc kubenswrapper[4858]: I0202 17:55:12.401898 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:55:12 crc kubenswrapper[4858]: E0202 17:55:12.405231 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:55:19 crc kubenswrapper[4858]: I0202 17:55:19.329127 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="dd969e2b-6db6-4175-8fa3-7dfa60a198ca" containerID="ba22b0a111d57f4e88302f300505b24d4cfcc03508d41e4d34742502fed729b6" exitCode=0 Feb 02 17:55:19 crc kubenswrapper[4858]: I0202 17:55:19.329239 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" event={"ID":"dd969e2b-6db6-4175-8fa3-7dfa60a198ca","Type":"ContainerDied","Data":"ba22b0a111d57f4e88302f300505b24d4cfcc03508d41e4d34742502fed729b6"} Feb 02 17:55:20 crc kubenswrapper[4858]: I0202 17:55:20.852496 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:55:20 crc kubenswrapper[4858]: I0202 17:55:20.996171 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ceilometer-compute-config-data-0\") pod \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " Feb 02 17:55:20 crc kubenswrapper[4858]: I0202 17:55:20.996324 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ceilometer-compute-config-data-1\") pod \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " Feb 02 17:55:20 crc kubenswrapper[4858]: I0202 17:55:20.996407 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ssh-key-openstack-edpm-ipam\") pod \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " Feb 02 17:55:20 crc kubenswrapper[4858]: I0202 17:55:20.996526 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-telemetry-combined-ca-bundle\") pod \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " Feb 02 17:55:20 crc kubenswrapper[4858]: I0202 17:55:20.996555 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ceilometer-compute-config-data-2\") pod \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " Feb 02 17:55:20 crc kubenswrapper[4858]: I0202 17:55:20.996671 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-inventory\") pod \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " Feb 02 17:55:20 crc kubenswrapper[4858]: I0202 17:55:20.996908 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcs47\" (UniqueName: \"kubernetes.io/projected/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-kube-api-access-mcs47\") pod \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\" (UID: \"dd969e2b-6db6-4175-8fa3-7dfa60a198ca\") " Feb 02 17:55:21 crc kubenswrapper[4858]: I0202 17:55:21.005907 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-telemetry-combined-ca-bundle" 
(OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "dd969e2b-6db6-4175-8fa3-7dfa60a198ca" (UID: "dd969e2b-6db6-4175-8fa3-7dfa60a198ca"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:55:21 crc kubenswrapper[4858]: I0202 17:55:21.007339 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-kube-api-access-mcs47" (OuterVolumeSpecName: "kube-api-access-mcs47") pod "dd969e2b-6db6-4175-8fa3-7dfa60a198ca" (UID: "dd969e2b-6db6-4175-8fa3-7dfa60a198ca"). InnerVolumeSpecName "kube-api-access-mcs47". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:55:21 crc kubenswrapper[4858]: I0202 17:55:21.031082 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-inventory" (OuterVolumeSpecName: "inventory") pod "dd969e2b-6db6-4175-8fa3-7dfa60a198ca" (UID: "dd969e2b-6db6-4175-8fa3-7dfa60a198ca"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:55:21 crc kubenswrapper[4858]: I0202 17:55:21.031944 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "dd969e2b-6db6-4175-8fa3-7dfa60a198ca" (UID: "dd969e2b-6db6-4175-8fa3-7dfa60a198ca"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:55:21 crc kubenswrapper[4858]: I0202 17:55:21.037180 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "dd969e2b-6db6-4175-8fa3-7dfa60a198ca" (UID: "dd969e2b-6db6-4175-8fa3-7dfa60a198ca"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:55:21 crc kubenswrapper[4858]: I0202 17:55:21.039315 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dd969e2b-6db6-4175-8fa3-7dfa60a198ca" (UID: "dd969e2b-6db6-4175-8fa3-7dfa60a198ca"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:55:21 crc kubenswrapper[4858]: I0202 17:55:21.039822 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "dd969e2b-6db6-4175-8fa3-7dfa60a198ca" (UID: "dd969e2b-6db6-4175-8fa3-7dfa60a198ca"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 17:55:21 crc kubenswrapper[4858]: I0202 17:55:21.101570 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 02 17:55:21 crc kubenswrapper[4858]: I0202 17:55:21.102182 4858 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 17:55:21 crc kubenswrapper[4858]: I0202 17:55:21.102203 4858 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 02 17:55:21 crc kubenswrapper[4858]: I0202 17:55:21.102223 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-inventory\") on node \"crc\" DevicePath \"\"" Feb 02 17:55:21 crc kubenswrapper[4858]: I0202 17:55:21.102242 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcs47\" (UniqueName: \"kubernetes.io/projected/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-kube-api-access-mcs47\") on node \"crc\" DevicePath \"\"" Feb 02 17:55:21 crc kubenswrapper[4858]: I0202 17:55:21.102257 4858 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 02 17:55:21 crc kubenswrapper[4858]: I0202 17:55:21.102270 4858 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/dd969e2b-6db6-4175-8fa3-7dfa60a198ca-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 02 17:55:21 crc kubenswrapper[4858]: I0202 17:55:21.352269 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" event={"ID":"dd969e2b-6db6-4175-8fa3-7dfa60a198ca","Type":"ContainerDied","Data":"e3dbe4147d375ff1d5627bcd4383d787985b57b5361df3a451529579d3f49744"} Feb 02 17:55:21 crc kubenswrapper[4858]: I0202 17:55:21.352353 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3dbe4147d375ff1d5627bcd4383d787985b57b5361df3a451529579d3f49744" Feb 02 17:55:21 crc kubenswrapper[4858]: I0202 17:55:21.352458 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-spthf" Feb 02 17:55:24 crc kubenswrapper[4858]: I0202 17:55:24.401570 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:55:24 crc kubenswrapper[4858]: E0202 17:55:24.402456 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:55:36 crc kubenswrapper[4858]: I0202 17:55:36.400462 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:55:36 crc kubenswrapper[4858]: E0202 17:55:36.401305 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:55:49 crc kubenswrapper[4858]: I0202 17:55:49.401101 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:55:49 crc kubenswrapper[4858]: E0202 17:55:49.401847 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:56:02 crc kubenswrapper[4858]: I0202 17:56:02.401430 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:56:02 crc kubenswrapper[4858]: E0202 17:56:02.402863 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.777932 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 02 17:56:12 crc kubenswrapper[4858]: E0202 17:56:12.778825 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037ac85c-0e71-439a-b48d-ed2d1e0b6b37" containerName="extract-utilities" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.778843 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="037ac85c-0e71-439a-b48d-ed2d1e0b6b37" containerName="extract-utilities" Feb 02 17:56:12 crc kubenswrapper[4858]: E0202 17:56:12.778861 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037ac85c-0e71-439a-b48d-ed2d1e0b6b37" containerName="extract-content" Feb 02 
17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.778869 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="037ac85c-0e71-439a-b48d-ed2d1e0b6b37" containerName="extract-content" Feb 02 17:56:12 crc kubenswrapper[4858]: E0202 17:56:12.778890 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e0af70-0d40-47cb-83f3-23d6b133fb62" containerName="extract-utilities" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.778898 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e0af70-0d40-47cb-83f3-23d6b133fb62" containerName="extract-utilities" Feb 02 17:56:12 crc kubenswrapper[4858]: E0202 17:56:12.778914 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e0af70-0d40-47cb-83f3-23d6b133fb62" containerName="registry-server" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.778924 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e0af70-0d40-47cb-83f3-23d6b133fb62" containerName="registry-server" Feb 02 17:56:12 crc kubenswrapper[4858]: E0202 17:56:12.778934 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4975dd51-bde5-4e35-a4a7-4f664e2bf729" containerName="extract-content" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.778941 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4975dd51-bde5-4e35-a4a7-4f664e2bf729" containerName="extract-content" Feb 02 17:56:12 crc kubenswrapper[4858]: E0202 17:56:12.778960 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e0af70-0d40-47cb-83f3-23d6b133fb62" containerName="extract-content" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.778967 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e0af70-0d40-47cb-83f3-23d6b133fb62" containerName="extract-content" Feb 02 17:56:12 crc kubenswrapper[4858]: E0202 17:56:12.778995 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4975dd51-bde5-4e35-a4a7-4f664e2bf729" containerName="registry-server" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.779002 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4975dd51-bde5-4e35-a4a7-4f664e2bf729" containerName="registry-server" Feb 02 17:56:12 crc kubenswrapper[4858]: E0202 17:56:12.779022 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4975dd51-bde5-4e35-a4a7-4f664e2bf729" containerName="extract-utilities" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.779030 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4975dd51-bde5-4e35-a4a7-4f664e2bf729" containerName="extract-utilities" Feb 02 17:56:12 crc kubenswrapper[4858]: E0202 17:56:12.779046 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd969e2b-6db6-4175-8fa3-7dfa60a198ca" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.779056 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd969e2b-6db6-4175-8fa3-7dfa60a198ca" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 02 17:56:12 crc kubenswrapper[4858]: E0202 17:56:12.779075 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037ac85c-0e71-439a-b48d-ed2d1e0b6b37" containerName="registry-server" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.779081 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="037ac85c-0e71-439a-b48d-ed2d1e0b6b37" containerName="registry-server" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.779295 4858 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4975dd51-bde5-4e35-a4a7-4f664e2bf729" containerName="registry-server" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.779314 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd969e2b-6db6-4175-8fa3-7dfa60a198ca" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.779327 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="78e0af70-0d40-47cb-83f3-23d6b133fb62" containerName="registry-server" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.779343 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="037ac85c-0e71-439a-b48d-ed2d1e0b6b37" containerName="registry-server" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.780150 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.783330 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.783542 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-scf2w" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.783708 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.785061 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.786675 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.809691 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-config-data\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.809936 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.810004 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.911675 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4q2n\" (UniqueName: \"kubernetes.io/projected/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-kube-api-access-h4q2n\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.911745 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.911779 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.911801 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.911862 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.912998 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.913067 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.913121 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.913182 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-config-data\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.914820 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.915219 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-config-data\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:12 crc kubenswrapper[4858]: I0202 17:56:12.918374 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:13 crc kubenswrapper[4858]: I0202 17:56:13.014721 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:13 crc kubenswrapper[4858]: I0202 17:56:13.014806 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:13 crc kubenswrapper[4858]: I0202 17:56:13.014932 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4q2n\" (UniqueName: \"kubernetes.io/projected/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-kube-api-access-h4q2n\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:13 crc kubenswrapper[4858]: I0202 17:56:13.014965 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:13 crc kubenswrapper[4858]: I0202 17:56:13.015124 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:13 crc kubenswrapper[4858]: I0202 17:56:13.015194 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:13 crc kubenswrapper[4858]: I0202 17:56:13.015273 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Feb 02 17:56:13 crc kubenswrapper[4858]: I0202 17:56:13.015369 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-test-operator-ephemeral-workdir\") pod 
\"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:13 crc kubenswrapper[4858]: I0202 17:56:13.015517 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:13 crc kubenswrapper[4858]: I0202 17:56:13.019030 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:13 crc kubenswrapper[4858]: I0202 17:56:13.019156 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:13 crc kubenswrapper[4858]: I0202 17:56:13.033181 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4q2n\" (UniqueName: \"kubernetes.io/projected/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-kube-api-access-h4q2n\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:13 crc kubenswrapper[4858]: I0202 17:56:13.044965 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " pod="openstack/tempest-tests-tempest" Feb 02 17:56:13 crc kubenswrapper[4858]: I0202 17:56:13.132017 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 17:56:13 crc kubenswrapper[4858]: I0202 17:56:13.588896 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 02 17:56:13 crc kubenswrapper[4858]: I0202 17:56:13.869197 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52","Type":"ContainerStarted","Data":"0b47451747e713f019d00adbd677ec869f17b7c71ae0c0712e0281c5aa7ed644"} Feb 02 17:56:14 crc kubenswrapper[4858]: I0202 17:56:14.400859 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:56:14 crc kubenswrapper[4858]: E0202 17:56:14.401310 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:56:29 crc kubenswrapper[4858]: I0202 17:56:29.400561 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:56:29 crc kubenswrapper[4858]: E0202 17:56:29.401434 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:56:41 crc kubenswrapper[4858]: I0202 17:56:41.401416 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:56:41 crc kubenswrapper[4858]: E0202 17:56:41.402267 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:56:46 crc kubenswrapper[4858]: E0202 17:56:46.569442 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 02 17:56:46 crc kubenswrapper[4858]: E0202 17:56:46.570085 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h4q2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 17:56:46 crc kubenswrapper[4858]: E0202 17:56:46.571278 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52" Feb 02 17:56:47 crc kubenswrapper[4858]: E0202 17:56:47.185764 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52" Feb 02 17:56:52 crc kubenswrapper[4858]: I0202 17:56:52.401480 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:56:52 crc kubenswrapper[4858]: E0202 17:56:52.402285 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:57:00 crc kubenswrapper[4858]: I0202 17:57:00.309566 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52","Type":"ContainerStarted","Data":"e8d7e0c6469001c518d68df1f0960bde4554608fc3677dd9cca9ab5ebbbe9a46"} Feb 02 17:57:00 crc kubenswrapper[4858]: I0202 17:57:00.337085 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.045527511 podStartE2EDuration="49.336853306s" podCreationTimestamp="2026-02-02 17:56:11 +0000 UTC" firstStartedPulling="2026-02-02 17:56:13.598118684 +0000 UTC m=+2474.750533949" lastFinishedPulling="2026-02-02 17:56:58.889444479 +0000 UTC m=+2520.041859744" observedRunningTime="2026-02-02 17:57:00.32735143 +0000 UTC m=+2521.479766705" watchObservedRunningTime="2026-02-02 17:57:00.336853306 +0000 UTC m=+2521.489268571" Feb 02 17:57:06 crc kubenswrapper[4858]: I0202 17:57:06.400691 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:57:06 crc kubenswrapper[4858]: E0202 17:57:06.401581 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:57:17 crc kubenswrapper[4858]: I0202 17:57:17.401010 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:57:17 crc kubenswrapper[4858]: E0202 17:57:17.401823 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:57:30 crc kubenswrapper[4858]: I0202 17:57:30.407698 4858 scope.go:117] "RemoveContainer" 
containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:57:30 crc kubenswrapper[4858]: E0202 17:57:30.409085 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:57:42 crc kubenswrapper[4858]: I0202 17:57:42.400370 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:57:42 crc kubenswrapper[4858]: E0202 17:57:42.401228 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:57:53 crc kubenswrapper[4858]: I0202 17:57:53.401712 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:57:53 crc kubenswrapper[4858]: E0202 17:57:53.402532 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:58:04 crc kubenswrapper[4858]: I0202 17:58:04.400821 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:58:04 crc kubenswrapper[4858]: E0202 17:58:04.401703 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:58:18 crc kubenswrapper[4858]: I0202 17:58:18.400990 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:58:18 crc kubenswrapper[4858]: E0202 17:58:18.402054 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:58:33 crc kubenswrapper[4858]: I0202 17:58:33.400959 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:58:33 crc kubenswrapper[4858]: E0202 17:58:33.402004 4858 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:58:44 crc kubenswrapper[4858]: I0202 17:58:44.404757 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:58:44 crc kubenswrapper[4858]: E0202 17:58:44.406514 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:58:59 crc kubenswrapper[4858]: I0202 17:58:59.401845 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:58:59 crc kubenswrapper[4858]: E0202 17:58:59.403293 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:59:13 crc kubenswrapper[4858]: I0202 17:59:13.401841 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:59:13 crc kubenswrapper[4858]: E0202 17:59:13.403056 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:59:28 crc kubenswrapper[4858]: I0202 17:59:28.401435 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:59:28 crc kubenswrapper[4858]: E0202 17:59:28.402425 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:59:40 crc kubenswrapper[4858]: I0202 17:59:40.409884 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:59:40 crc kubenswrapper[4858]: E0202 17:59:40.411020 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:59:43 crc kubenswrapper[4858]: I0202 17:59:43.857241 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nbw57"] Feb 02 17:59:43 crc kubenswrapper[4858]: I0202 17:59:43.862329 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbw57" Feb 02 17:59:43 crc kubenswrapper[4858]: I0202 17:59:43.875042 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbw57"] Feb 02 17:59:43 crc kubenswrapper[4858]: I0202 17:59:43.945352 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a937886-50bb-4934-ae90-3a40449685e5-catalog-content\") pod \"redhat-marketplace-nbw57\" (UID: \"0a937886-50bb-4934-ae90-3a40449685e5\") " pod="openshift-marketplace/redhat-marketplace-nbw57" Feb 02 17:59:43 crc kubenswrapper[4858]: I0202 17:59:43.945443 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr7mk\" (UniqueName: \"kubernetes.io/projected/0a937886-50bb-4934-ae90-3a40449685e5-kube-api-access-sr7mk\") pod \"redhat-marketplace-nbw57\" (UID: \"0a937886-50bb-4934-ae90-3a40449685e5\") " pod="openshift-marketplace/redhat-marketplace-nbw57" Feb 02 17:59:43 crc kubenswrapper[4858]: I0202 17:59:43.945523 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a937886-50bb-4934-ae90-3a40449685e5-utilities\") pod \"redhat-marketplace-nbw57\" (UID: \"0a937886-50bb-4934-ae90-3a40449685e5\") " pod="openshift-marketplace/redhat-marketplace-nbw57" Feb 02 17:59:44 crc kubenswrapper[4858]: I0202 17:59:44.048330 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a937886-50bb-4934-ae90-3a40449685e5-catalog-content\") pod \"redhat-marketplace-nbw57\" (UID: \"0a937886-50bb-4934-ae90-3a40449685e5\") " pod="openshift-marketplace/redhat-marketplace-nbw57" Feb 02 17:59:44 crc kubenswrapper[4858]: I0202 17:59:44.048464 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr7mk\" (UniqueName: \"kubernetes.io/projected/0a937886-50bb-4934-ae90-3a40449685e5-kube-api-access-sr7mk\") pod \"redhat-marketplace-nbw57\" (UID: \"0a937886-50bb-4934-ae90-3a40449685e5\") " pod="openshift-marketplace/redhat-marketplace-nbw57" Feb 02 17:59:44 crc kubenswrapper[4858]: I0202 17:59:44.048568 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a937886-50bb-4934-ae90-3a40449685e5-utilities\") pod \"redhat-marketplace-nbw57\" (UID: \"0a937886-50bb-4934-ae90-3a40449685e5\") " pod="openshift-marketplace/redhat-marketplace-nbw57" Feb 02 17:59:44 crc kubenswrapper[4858]: I0202 17:59:44.049468 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a937886-50bb-4934-ae90-3a40449685e5-utilities\") pod \"redhat-marketplace-nbw57\" (UID: \"0a937886-50bb-4934-ae90-3a40449685e5\") " 
pod="openshift-marketplace/redhat-marketplace-nbw57" Feb 02 17:59:44 crc kubenswrapper[4858]: I0202 17:59:44.049809 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a937886-50bb-4934-ae90-3a40449685e5-catalog-content\") pod \"redhat-marketplace-nbw57\" (UID: \"0a937886-50bb-4934-ae90-3a40449685e5\") " pod="openshift-marketplace/redhat-marketplace-nbw57" Feb 02 17:59:44 crc kubenswrapper[4858]: I0202 17:59:44.086026 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr7mk\" (UniqueName: \"kubernetes.io/projected/0a937886-50bb-4934-ae90-3a40449685e5-kube-api-access-sr7mk\") pod \"redhat-marketplace-nbw57\" (UID: \"0a937886-50bb-4934-ae90-3a40449685e5\") " pod="openshift-marketplace/redhat-marketplace-nbw57" Feb 02 17:59:44 crc kubenswrapper[4858]: I0202 17:59:44.192675 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbw57" Feb 02 17:59:44 crc kubenswrapper[4858]: I0202 17:59:44.799183 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbw57"] Feb 02 17:59:44 crc kubenswrapper[4858]: I0202 17:59:44.938795 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbw57" event={"ID":"0a937886-50bb-4934-ae90-3a40449685e5","Type":"ContainerStarted","Data":"f534fb9c7eeb3c4c543471e20a85fbf347188a07890fd85186597e1902a954f8"} Feb 02 17:59:45 crc kubenswrapper[4858]: I0202 17:59:45.952402 4858 generic.go:334] "Generic (PLEG): container finished" podID="0a937886-50bb-4934-ae90-3a40449685e5" containerID="50d52a9e7bafcbd478c2efc1d278ef67d0830d7a0ce2d802541fc755002ee940" exitCode=0 Feb 02 17:59:45 crc kubenswrapper[4858]: I0202 17:59:45.952590 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbw57" event={"ID":"0a937886-50bb-4934-ae90-3a40449685e5","Type":"ContainerDied","Data":"50d52a9e7bafcbd478c2efc1d278ef67d0830d7a0ce2d802541fc755002ee940"} Feb 02 17:59:45 crc kubenswrapper[4858]: I0202 17:59:45.954640 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 17:59:46 crc kubenswrapper[4858]: I0202 17:59:46.968576 4858 generic.go:334] "Generic (PLEG): container finished" podID="0a937886-50bb-4934-ae90-3a40449685e5" containerID="5982063f2f6f135fe569b1d686e2310bcccfc3a9a86a69e262176050e9f86a6d" exitCode=0 Feb 02 17:59:46 crc kubenswrapper[4858]: E0202 17:59:46.968729 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a937886_50bb_4934_ae90_3a40449685e5.slice/crio-conmon-5982063f2f6f135fe569b1d686e2310bcccfc3a9a86a69e262176050e9f86a6d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a937886_50bb_4934_ae90_3a40449685e5.slice/crio-5982063f2f6f135fe569b1d686e2310bcccfc3a9a86a69e262176050e9f86a6d.scope\": RecentStats: unable to find data in memory cache]" Feb 02 17:59:46 crc kubenswrapper[4858]: I0202 17:59:46.968786 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbw57" event={"ID":"0a937886-50bb-4934-ae90-3a40449685e5","Type":"ContainerDied","Data":"5982063f2f6f135fe569b1d686e2310bcccfc3a9a86a69e262176050e9f86a6d"} Feb 02 17:59:47 crc kubenswrapper[4858]: I0202 
17:59:47.980030 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbw57" event={"ID":"0a937886-50bb-4934-ae90-3a40449685e5","Type":"ContainerStarted","Data":"c328a1b54fa7524e96265d28e68efbe46a4c5aef9545353c8d6889ceb679d04b"} Feb 02 17:59:48 crc kubenswrapper[4858]: I0202 17:59:48.013045 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nbw57" podStartSLOduration=3.358264437 podStartE2EDuration="5.013020962s" podCreationTimestamp="2026-02-02 17:59:43 +0000 UTC" firstStartedPulling="2026-02-02 17:59:45.95439864 +0000 UTC m=+2687.106813905" lastFinishedPulling="2026-02-02 17:59:47.609155165 +0000 UTC m=+2688.761570430" observedRunningTime="2026-02-02 17:59:48.005956596 +0000 UTC m=+2689.158371861" watchObservedRunningTime="2026-02-02 17:59:48.013020962 +0000 UTC m=+2689.165436227" Feb 02 17:59:54 crc kubenswrapper[4858]: I0202 17:59:54.193692 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nbw57" Feb 02 17:59:54 crc kubenswrapper[4858]: I0202 17:59:54.194414 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nbw57" Feb 02 17:59:54 crc kubenswrapper[4858]: I0202 17:59:54.254809 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nbw57" Feb 02 17:59:54 crc kubenswrapper[4858]: I0202 17:59:54.400657 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 17:59:54 crc kubenswrapper[4858]: E0202 17:59:54.400939 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 17:59:55 crc kubenswrapper[4858]: I0202 17:59:55.114404 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nbw57" Feb 02 17:59:55 crc kubenswrapper[4858]: I0202 17:59:55.180752 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbw57"] Feb 02 17:59:57 crc kubenswrapper[4858]: I0202 17:59:57.071720 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nbw57" podUID="0a937886-50bb-4934-ae90-3a40449685e5" containerName="registry-server" containerID="cri-o://c328a1b54fa7524e96265d28e68efbe46a4c5aef9545353c8d6889ceb679d04b" gracePeriod=2 Feb 02 17:59:57 crc kubenswrapper[4858]: I0202 17:59:57.594450 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbw57" Feb 02 17:59:57 crc kubenswrapper[4858]: I0202 17:59:57.659080 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a937886-50bb-4934-ae90-3a40449685e5-utilities\") pod \"0a937886-50bb-4934-ae90-3a40449685e5\" (UID: \"0a937886-50bb-4934-ae90-3a40449685e5\") " Feb 02 17:59:57 crc kubenswrapper[4858]: I0202 17:59:57.659500 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr7mk\" (UniqueName: \"kubernetes.io/projected/0a937886-50bb-4934-ae90-3a40449685e5-kube-api-access-sr7mk\") pod \"0a937886-50bb-4934-ae90-3a40449685e5\" (UID: \"0a937886-50bb-4934-ae90-3a40449685e5\") " Feb 02 17:59:57 crc kubenswrapper[4858]: I0202 17:59:57.659705 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a937886-50bb-4934-ae90-3a40449685e5-catalog-content\") pod \"0a937886-50bb-4934-ae90-3a40449685e5\" (UID: \"0a937886-50bb-4934-ae90-3a40449685e5\") " Feb 02 17:59:57 crc kubenswrapper[4858]: I0202 17:59:57.661100 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a937886-50bb-4934-ae90-3a40449685e5-utilities" (OuterVolumeSpecName: "utilities") pod "0a937886-50bb-4934-ae90-3a40449685e5" (UID: "0a937886-50bb-4934-ae90-3a40449685e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:59:57 crc kubenswrapper[4858]: I0202 17:59:57.665878 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a937886-50bb-4934-ae90-3a40449685e5-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 17:59:57 crc kubenswrapper[4858]: I0202 17:59:57.681239 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a937886-50bb-4934-ae90-3a40449685e5-kube-api-access-sr7mk" (OuterVolumeSpecName: "kube-api-access-sr7mk") pod "0a937886-50bb-4934-ae90-3a40449685e5" (UID: "0a937886-50bb-4934-ae90-3a40449685e5"). InnerVolumeSpecName "kube-api-access-sr7mk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 17:59:57 crc kubenswrapper[4858]: I0202 17:59:57.690795 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a937886-50bb-4934-ae90-3a40449685e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a937886-50bb-4934-ae90-3a40449685e5" (UID: "0a937886-50bb-4934-ae90-3a40449685e5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 17:59:57 crc kubenswrapper[4858]: I0202 17:59:57.768331 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr7mk\" (UniqueName: \"kubernetes.io/projected/0a937886-50bb-4934-ae90-3a40449685e5-kube-api-access-sr7mk\") on node \"crc\" DevicePath \"\"" Feb 02 17:59:57 crc kubenswrapper[4858]: I0202 17:59:57.768373 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a937886-50bb-4934-ae90-3a40449685e5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 17:59:58 crc kubenswrapper[4858]: I0202 17:59:58.092623 4858 generic.go:334] "Generic (PLEG): container finished" podID="0a937886-50bb-4934-ae90-3a40449685e5" containerID="c328a1b54fa7524e96265d28e68efbe46a4c5aef9545353c8d6889ceb679d04b" exitCode=0 Feb 02 17:59:58 crc kubenswrapper[4858]: I0202 17:59:58.092672 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbw57" event={"ID":"0a937886-50bb-4934-ae90-3a40449685e5","Type":"ContainerDied","Data":"c328a1b54fa7524e96265d28e68efbe46a4c5aef9545353c8d6889ceb679d04b"} Feb 02 17:59:58 crc kubenswrapper[4858]: I0202 17:59:58.092701 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbw57" event={"ID":"0a937886-50bb-4934-ae90-3a40449685e5","Type":"ContainerDied","Data":"f534fb9c7eeb3c4c543471e20a85fbf347188a07890fd85186597e1902a954f8"} Feb 02 17:59:58 crc kubenswrapper[4858]: I0202 17:59:58.092717 4858 scope.go:117] "RemoveContainer" containerID="c328a1b54fa7524e96265d28e68efbe46a4c5aef9545353c8d6889ceb679d04b" Feb 02 17:59:58 crc kubenswrapper[4858]: I0202 17:59:58.092741 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbw57" Feb 02 17:59:58 crc kubenswrapper[4858]: I0202 17:59:58.160251 4858 scope.go:117] "RemoveContainer" containerID="5982063f2f6f135fe569b1d686e2310bcccfc3a9a86a69e262176050e9f86a6d" Feb 02 17:59:58 crc kubenswrapper[4858]: I0202 17:59:58.163419 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbw57"] Feb 02 17:59:58 crc kubenswrapper[4858]: I0202 17:59:58.208004 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbw57"] Feb 02 17:59:58 crc kubenswrapper[4858]: I0202 17:59:58.226014 4858 scope.go:117] "RemoveContainer" containerID="50d52a9e7bafcbd478c2efc1d278ef67d0830d7a0ce2d802541fc755002ee940" Feb 02 17:59:58 crc kubenswrapper[4858]: I0202 17:59:58.268442 4858 scope.go:117] "RemoveContainer" containerID="c328a1b54fa7524e96265d28e68efbe46a4c5aef9545353c8d6889ceb679d04b" Feb 02 17:59:58 crc kubenswrapper[4858]: E0202 17:59:58.269583 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c328a1b54fa7524e96265d28e68efbe46a4c5aef9545353c8d6889ceb679d04b\": container with ID starting with c328a1b54fa7524e96265d28e68efbe46a4c5aef9545353c8d6889ceb679d04b not found: ID does not exist" containerID="c328a1b54fa7524e96265d28e68efbe46a4c5aef9545353c8d6889ceb679d04b" Feb 02 17:59:58 crc kubenswrapper[4858]: I0202 17:59:58.269634 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c328a1b54fa7524e96265d28e68efbe46a4c5aef9545353c8d6889ceb679d04b"} err="failed to get container status \"c328a1b54fa7524e96265d28e68efbe46a4c5aef9545353c8d6889ceb679d04b\": rpc error: code = NotFound desc = could not find container \"c328a1b54fa7524e96265d28e68efbe46a4c5aef9545353c8d6889ceb679d04b\": container with ID starting with c328a1b54fa7524e96265d28e68efbe46a4c5aef9545353c8d6889ceb679d04b not found: ID does not exist" Feb 02 17:59:58 crc kubenswrapper[4858]: I0202 17:59:58.269659 4858 scope.go:117] "RemoveContainer" containerID="5982063f2f6f135fe569b1d686e2310bcccfc3a9a86a69e262176050e9f86a6d" Feb 02 17:59:58 crc kubenswrapper[4858]: E0202 17:59:58.270526 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5982063f2f6f135fe569b1d686e2310bcccfc3a9a86a69e262176050e9f86a6d\": container with ID starting with 5982063f2f6f135fe569b1d686e2310bcccfc3a9a86a69e262176050e9f86a6d not found: ID does not exist" containerID="5982063f2f6f135fe569b1d686e2310bcccfc3a9a86a69e262176050e9f86a6d" Feb 02 17:59:58 crc kubenswrapper[4858]: I0202 17:59:58.270582 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5982063f2f6f135fe569b1d686e2310bcccfc3a9a86a69e262176050e9f86a6d"} err="failed to get container status \"5982063f2f6f135fe569b1d686e2310bcccfc3a9a86a69e262176050e9f86a6d\": rpc error: code = NotFound desc = could not find container \"5982063f2f6f135fe569b1d686e2310bcccfc3a9a86a69e262176050e9f86a6d\": container with ID starting with 5982063f2f6f135fe569b1d686e2310bcccfc3a9a86a69e262176050e9f86a6d not found: ID does not exist" Feb 02 17:59:58 crc kubenswrapper[4858]: I0202 17:59:58.270618 4858 scope.go:117] "RemoveContainer" containerID="50d52a9e7bafcbd478c2efc1d278ef67d0830d7a0ce2d802541fc755002ee940" Feb 02 17:59:58 crc kubenswrapper[4858]: E0202 17:59:58.270999 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"50d52a9e7bafcbd478c2efc1d278ef67d0830d7a0ce2d802541fc755002ee940\": container with ID starting with 50d52a9e7bafcbd478c2efc1d278ef67d0830d7a0ce2d802541fc755002ee940 not found: ID does not exist" containerID="50d52a9e7bafcbd478c2efc1d278ef67d0830d7a0ce2d802541fc755002ee940" Feb 02 17:59:58 crc kubenswrapper[4858]: I0202 17:59:58.271022 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50d52a9e7bafcbd478c2efc1d278ef67d0830d7a0ce2d802541fc755002ee940"} err="failed to get container status \"50d52a9e7bafcbd478c2efc1d278ef67d0830d7a0ce2d802541fc755002ee940\": rpc error: code = NotFound desc = could not find container \"50d52a9e7bafcbd478c2efc1d278ef67d0830d7a0ce2d802541fc755002ee940\": container with ID starting with 50d52a9e7bafcbd478c2efc1d278ef67d0830d7a0ce2d802541fc755002ee940 not found: ID does not exist" Feb 02 17:59:58 crc kubenswrapper[4858]: I0202 17:59:58.413991 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a937886-50bb-4934-ae90-3a40449685e5" path="/var/lib/kubelet/pods/0a937886-50bb-4934-ae90-3a40449685e5/volumes" Feb 02 18:00:00 crc kubenswrapper[4858]: I0202 18:00:00.159216 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500920-lnvg5"] Feb 02 18:00:00 crc kubenswrapper[4858]: E0202 18:00:00.160773 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a937886-50bb-4934-ae90-3a40449685e5" containerName="extract-utilities" Feb 02 18:00:00 crc kubenswrapper[4858]: I0202 18:00:00.160801 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a937886-50bb-4934-ae90-3a40449685e5" containerName="extract-utilities" Feb 02 18:00:00 crc kubenswrapper[4858]: E0202 18:00:00.160828 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a937886-50bb-4934-ae90-3a40449685e5" containerName="extract-content" Feb 02 18:00:00 crc kubenswrapper[4858]: I0202 18:00:00.160842 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a937886-50bb-4934-ae90-3a40449685e5" containerName="extract-content" Feb 02 18:00:00 crc kubenswrapper[4858]: E0202 18:00:00.160854 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a937886-50bb-4934-ae90-3a40449685e5" containerName="registry-server" Feb 02 18:00:00 crc kubenswrapper[4858]: I0202 18:00:00.160861 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a937886-50bb-4934-ae90-3a40449685e5" containerName="registry-server" Feb 02 18:00:00 crc kubenswrapper[4858]: I0202 18:00:00.161144 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a937886-50bb-4934-ae90-3a40449685e5" containerName="registry-server" Feb 02 18:00:00 crc kubenswrapper[4858]: I0202 18:00:00.162223 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500920-lnvg5" Feb 02 18:00:00 crc kubenswrapper[4858]: I0202 18:00:00.165276 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 18:00:00 crc kubenswrapper[4858]: I0202 18:00:00.165530 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 18:00:00 crc kubenswrapper[4858]: I0202 18:00:00.172733 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500920-lnvg5"] Feb 02 18:00:00 crc kubenswrapper[4858]: I0202 18:00:00.219205 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a858b085-33a3-4548-8608-4976bf1ccb2a-secret-volume\") pod \"collect-profiles-29500920-lnvg5\" (UID: \"a858b085-33a3-4548-8608-4976bf1ccb2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500920-lnvg5" Feb 02 18:00:00 crc kubenswrapper[4858]: I0202 18:00:00.219281 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a858b085-33a3-4548-8608-4976bf1ccb2a-config-volume\") pod \"collect-profiles-29500920-lnvg5\" (UID: \"a858b085-33a3-4548-8608-4976bf1ccb2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500920-lnvg5" Feb 02 18:00:00 crc kubenswrapper[4858]: I0202 18:00:00.219411 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26khm\" (UniqueName: \"kubernetes.io/projected/a858b085-33a3-4548-8608-4976bf1ccb2a-kube-api-access-26khm\") pod \"collect-profiles-29500920-lnvg5\" (UID: \"a858b085-33a3-4548-8608-4976bf1ccb2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500920-lnvg5" Feb 02 18:00:00 crc kubenswrapper[4858]: I0202 18:00:00.321387 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a858b085-33a3-4548-8608-4976bf1ccb2a-config-volume\") pod \"collect-profiles-29500920-lnvg5\" (UID: \"a858b085-33a3-4548-8608-4976bf1ccb2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500920-lnvg5" Feb 02 18:00:00 crc kubenswrapper[4858]: I0202 18:00:00.321873 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26khm\" (UniqueName: \"kubernetes.io/projected/a858b085-33a3-4548-8608-4976bf1ccb2a-kube-api-access-26khm\") pod \"collect-profiles-29500920-lnvg5\" (UID: \"a858b085-33a3-4548-8608-4976bf1ccb2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500920-lnvg5" Feb 02 18:00:00 crc kubenswrapper[4858]: I0202 18:00:00.321943 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a858b085-33a3-4548-8608-4976bf1ccb2a-secret-volume\") pod \"collect-profiles-29500920-lnvg5\" (UID: \"a858b085-33a3-4548-8608-4976bf1ccb2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500920-lnvg5" Feb 02 18:00:00 crc kubenswrapper[4858]: I0202 18:00:00.322444 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a858b085-33a3-4548-8608-4976bf1ccb2a-config-volume\") pod 
\"collect-profiles-29500920-lnvg5\" (UID: \"a858b085-33a3-4548-8608-4976bf1ccb2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500920-lnvg5" Feb 02 18:00:00 crc kubenswrapper[4858]: I0202 18:00:00.342083 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a858b085-33a3-4548-8608-4976bf1ccb2a-secret-volume\") pod \"collect-profiles-29500920-lnvg5\" (UID: \"a858b085-33a3-4548-8608-4976bf1ccb2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500920-lnvg5" Feb 02 18:00:00 crc kubenswrapper[4858]: I0202 18:00:00.344523 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26khm\" (UniqueName: \"kubernetes.io/projected/a858b085-33a3-4548-8608-4976bf1ccb2a-kube-api-access-26khm\") pod \"collect-profiles-29500920-lnvg5\" (UID: \"a858b085-33a3-4548-8608-4976bf1ccb2a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500920-lnvg5" Feb 02 18:00:00 crc kubenswrapper[4858]: I0202 18:00:00.508541 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 18:00:00 crc kubenswrapper[4858]: I0202 18:00:00.516095 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500920-lnvg5" Feb 02 18:00:01 crc kubenswrapper[4858]: I0202 18:00:01.058748 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500920-lnvg5"] Feb 02 18:00:01 crc kubenswrapper[4858]: I0202 18:00:01.134301 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500920-lnvg5" event={"ID":"a858b085-33a3-4548-8608-4976bf1ccb2a","Type":"ContainerStarted","Data":"aa6f6f0d2d12bf5f6ee05fcd30d731538aec274c3cd84bd9178eb555db3d37c8"} Feb 02 18:00:02 crc kubenswrapper[4858]: I0202 18:00:02.145917 4858 generic.go:334] "Generic (PLEG): container finished" podID="a858b085-33a3-4548-8608-4976bf1ccb2a" containerID="ea0dc2610361a2dfb9cc1e3e67efd5c034b51ff174a2aebedcf022aeef6f4839" exitCode=0 Feb 02 18:00:02 crc kubenswrapper[4858]: I0202 18:00:02.145989 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500920-lnvg5" event={"ID":"a858b085-33a3-4548-8608-4976bf1ccb2a","Type":"ContainerDied","Data":"ea0dc2610361a2dfb9cc1e3e67efd5c034b51ff174a2aebedcf022aeef6f4839"} Feb 02 18:00:03 crc kubenswrapper[4858]: I0202 18:00:03.579965 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500920-lnvg5" Feb 02 18:00:03 crc kubenswrapper[4858]: I0202 18:00:03.703451 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26khm\" (UniqueName: \"kubernetes.io/projected/a858b085-33a3-4548-8608-4976bf1ccb2a-kube-api-access-26khm\") pod \"a858b085-33a3-4548-8608-4976bf1ccb2a\" (UID: \"a858b085-33a3-4548-8608-4976bf1ccb2a\") " Feb 02 18:00:03 crc kubenswrapper[4858]: I0202 18:00:03.703596 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a858b085-33a3-4548-8608-4976bf1ccb2a-secret-volume\") pod \"a858b085-33a3-4548-8608-4976bf1ccb2a\" (UID: \"a858b085-33a3-4548-8608-4976bf1ccb2a\") " Feb 02 18:00:03 crc kubenswrapper[4858]: I0202 18:00:03.703883 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a858b085-33a3-4548-8608-4976bf1ccb2a-config-volume\") pod \"a858b085-33a3-4548-8608-4976bf1ccb2a\" (UID: \"a858b085-33a3-4548-8608-4976bf1ccb2a\") " Feb 02 18:00:03 crc kubenswrapper[4858]: I0202 18:00:03.704697 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a858b085-33a3-4548-8608-4976bf1ccb2a-config-volume" (OuterVolumeSpecName: "config-volume") pod "a858b085-33a3-4548-8608-4976bf1ccb2a" (UID: "a858b085-33a3-4548-8608-4976bf1ccb2a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 18:00:03 crc kubenswrapper[4858]: I0202 18:00:03.705171 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a858b085-33a3-4548-8608-4976bf1ccb2a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 18:00:03 crc kubenswrapper[4858]: I0202 18:00:03.709583 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a858b085-33a3-4548-8608-4976bf1ccb2a-kube-api-access-26khm" (OuterVolumeSpecName: "kube-api-access-26khm") pod "a858b085-33a3-4548-8608-4976bf1ccb2a" (UID: "a858b085-33a3-4548-8608-4976bf1ccb2a"). InnerVolumeSpecName "kube-api-access-26khm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:00:03 crc kubenswrapper[4858]: I0202 18:00:03.709740 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a858b085-33a3-4548-8608-4976bf1ccb2a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a858b085-33a3-4548-8608-4976bf1ccb2a" (UID: "a858b085-33a3-4548-8608-4976bf1ccb2a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 18:00:03 crc kubenswrapper[4858]: I0202 18:00:03.807429 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26khm\" (UniqueName: \"kubernetes.io/projected/a858b085-33a3-4548-8608-4976bf1ccb2a-kube-api-access-26khm\") on node \"crc\" DevicePath \"\"" Feb 02 18:00:03 crc kubenswrapper[4858]: I0202 18:00:03.807485 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a858b085-33a3-4548-8608-4976bf1ccb2a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 18:00:04 crc kubenswrapper[4858]: I0202 18:00:04.164220 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500920-lnvg5" event={"ID":"a858b085-33a3-4548-8608-4976bf1ccb2a","Type":"ContainerDied","Data":"aa6f6f0d2d12bf5f6ee05fcd30d731538aec274c3cd84bd9178eb555db3d37c8"} Feb 02 18:00:04 crc kubenswrapper[4858]: I0202 18:00:04.164327 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa6f6f0d2d12bf5f6ee05fcd30d731538aec274c3cd84bd9178eb555db3d37c8" Feb 02 18:00:04 crc kubenswrapper[4858]: I0202 18:00:04.164365 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500920-lnvg5" Feb 02 18:00:04 crc kubenswrapper[4858]: I0202 18:00:04.663501 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29"] Feb 02 18:00:04 crc kubenswrapper[4858]: I0202 18:00:04.671810 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500875-tdl29"] Feb 02 18:00:06 crc kubenswrapper[4858]: I0202 18:00:06.418576 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3c72db6-4315-4210-9cfe-3c27b18e4abd" path="/var/lib/kubelet/pods/f3c72db6-4315-4210-9cfe-3c27b18e4abd/volumes" Feb 02 18:00:09 crc kubenswrapper[4858]: I0202 18:00:09.401614 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 18:00:10 crc kubenswrapper[4858]: I0202 18:00:10.242544 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerStarted","Data":"c138fbcf7f05ffa20c7f25a2579c78076541e9f160625248f5a538afb5d97df4"} Feb 02 18:00:11 crc kubenswrapper[4858]: I0202 18:00:11.510377 4858 scope.go:117] "RemoveContainer" containerID="2d2736426dc9b8cf377bc45320176e344e9a75b7b04efaf3a097fdd13f77bb21" Feb 02 18:01:00 crc kubenswrapper[4858]: I0202 18:01:00.149716 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29500921-5cpwt"] Feb 02 18:01:00 crc kubenswrapper[4858]: E0202 18:01:00.150626 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a858b085-33a3-4548-8608-4976bf1ccb2a" containerName="collect-profiles" Feb 02 18:01:00 crc kubenswrapper[4858]: I0202 18:01:00.150640 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a858b085-33a3-4548-8608-4976bf1ccb2a" containerName="collect-profiles" Feb 02 18:01:00 crc kubenswrapper[4858]: I0202 18:01:00.150865 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a858b085-33a3-4548-8608-4976bf1ccb2a" containerName="collect-profiles" Feb 02 18:01:00 crc kubenswrapper[4858]: I0202 18:01:00.151587 4858 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500921-5cpwt" Feb 02 18:01:00 crc kubenswrapper[4858]: I0202 18:01:00.166592 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500921-5cpwt"] Feb 02 18:01:00 crc kubenswrapper[4858]: I0202 18:01:00.319133 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9gl7\" (UniqueName: \"kubernetes.io/projected/aedf0b15-0748-4ad7-afce-e421d046a585-kube-api-access-n9gl7\") pod \"keystone-cron-29500921-5cpwt\" (UID: \"aedf0b15-0748-4ad7-afce-e421d046a585\") " pod="openstack/keystone-cron-29500921-5cpwt" Feb 02 18:01:00 crc kubenswrapper[4858]: I0202 18:01:00.319228 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aedf0b15-0748-4ad7-afce-e421d046a585-combined-ca-bundle\") pod \"keystone-cron-29500921-5cpwt\" (UID: \"aedf0b15-0748-4ad7-afce-e421d046a585\") " pod="openstack/keystone-cron-29500921-5cpwt" Feb 02 18:01:00 crc kubenswrapper[4858]: I0202 18:01:00.319311 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aedf0b15-0748-4ad7-afce-e421d046a585-fernet-keys\") pod \"keystone-cron-29500921-5cpwt\" (UID: \"aedf0b15-0748-4ad7-afce-e421d046a585\") " pod="openstack/keystone-cron-29500921-5cpwt" Feb 02 18:01:00 crc kubenswrapper[4858]: I0202 18:01:00.319485 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aedf0b15-0748-4ad7-afce-e421d046a585-config-data\") pod \"keystone-cron-29500921-5cpwt\" (UID: \"aedf0b15-0748-4ad7-afce-e421d046a585\") " pod="openstack/keystone-cron-29500921-5cpwt" Feb 02 18:01:00 crc kubenswrapper[4858]: I0202 18:01:00.420822 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9gl7\" (UniqueName: \"kubernetes.io/projected/aedf0b15-0748-4ad7-afce-e421d046a585-kube-api-access-n9gl7\") pod \"keystone-cron-29500921-5cpwt\" (UID: \"aedf0b15-0748-4ad7-afce-e421d046a585\") " pod="openstack/keystone-cron-29500921-5cpwt" Feb 02 18:01:00 crc kubenswrapper[4858]: I0202 18:01:00.420911 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aedf0b15-0748-4ad7-afce-e421d046a585-combined-ca-bundle\") pod \"keystone-cron-29500921-5cpwt\" (UID: \"aedf0b15-0748-4ad7-afce-e421d046a585\") " pod="openstack/keystone-cron-29500921-5cpwt" Feb 02 18:01:00 crc kubenswrapper[4858]: I0202 18:01:00.420988 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aedf0b15-0748-4ad7-afce-e421d046a585-fernet-keys\") pod \"keystone-cron-29500921-5cpwt\" (UID: \"aedf0b15-0748-4ad7-afce-e421d046a585\") " pod="openstack/keystone-cron-29500921-5cpwt" Feb 02 18:01:00 crc kubenswrapper[4858]: I0202 18:01:00.421104 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aedf0b15-0748-4ad7-afce-e421d046a585-config-data\") pod \"keystone-cron-29500921-5cpwt\" (UID: \"aedf0b15-0748-4ad7-afce-e421d046a585\") " pod="openstack/keystone-cron-29500921-5cpwt" Feb 02 18:01:00 crc kubenswrapper[4858]: I0202 18:01:00.426691 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aedf0b15-0748-4ad7-afce-e421d046a585-fernet-keys\") pod \"keystone-cron-29500921-5cpwt\" (UID: \"aedf0b15-0748-4ad7-afce-e421d046a585\") " pod="openstack/keystone-cron-29500921-5cpwt" Feb 02 18:01:00 crc kubenswrapper[4858]: I0202 18:01:00.427198 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aedf0b15-0748-4ad7-afce-e421d046a585-combined-ca-bundle\") pod \"keystone-cron-29500921-5cpwt\" (UID: \"aedf0b15-0748-4ad7-afce-e421d046a585\") " pod="openstack/keystone-cron-29500921-5cpwt" Feb 02 18:01:00 crc kubenswrapper[4858]: I0202 18:01:00.436182 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aedf0b15-0748-4ad7-afce-e421d046a585-config-data\") pod \"keystone-cron-29500921-5cpwt\" (UID: \"aedf0b15-0748-4ad7-afce-e421d046a585\") " pod="openstack/keystone-cron-29500921-5cpwt" Feb 02 18:01:00 crc kubenswrapper[4858]: I0202 18:01:00.437967 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9gl7\" (UniqueName: \"kubernetes.io/projected/aedf0b15-0748-4ad7-afce-e421d046a585-kube-api-access-n9gl7\") pod \"keystone-cron-29500921-5cpwt\" (UID: \"aedf0b15-0748-4ad7-afce-e421d046a585\") " pod="openstack/keystone-cron-29500921-5cpwt" Feb 02 18:01:00 crc kubenswrapper[4858]: I0202 18:01:00.470256 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500921-5cpwt" Feb 02 18:01:00 crc kubenswrapper[4858]: I0202 18:01:00.905342 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29500921-5cpwt"] Feb 02 18:01:01 crc kubenswrapper[4858]: I0202 18:01:01.759163 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500921-5cpwt" event={"ID":"aedf0b15-0748-4ad7-afce-e421d046a585","Type":"ContainerStarted","Data":"931c4a1f853eafb58f714b93b9e77da8a56d1b2b088b7a37b34a0a8231c3841c"} Feb 02 18:01:01 crc kubenswrapper[4858]: I0202 18:01:01.759504 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500921-5cpwt" event={"ID":"aedf0b15-0748-4ad7-afce-e421d046a585","Type":"ContainerStarted","Data":"7f1bf2c59f36fa20be4b58f74fe73bc49194e7fb0525abbfb06efa4b5f0026c4"} Feb 02 18:01:01 crc kubenswrapper[4858]: I0202 18:01:01.783951 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29500921-5cpwt" podStartSLOduration=1.7839295210000001 podStartE2EDuration="1.783929521s" podCreationTimestamp="2026-02-02 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 18:01:01.782038217 +0000 UTC m=+2762.934453492" watchObservedRunningTime="2026-02-02 18:01:01.783929521 +0000 UTC m=+2762.936344786" Feb 02 18:01:03 crc kubenswrapper[4858]: I0202 18:01:03.781244 4858 generic.go:334] "Generic (PLEG): container finished" podID="aedf0b15-0748-4ad7-afce-e421d046a585" containerID="931c4a1f853eafb58f714b93b9e77da8a56d1b2b088b7a37b34a0a8231c3841c" exitCode=0 Feb 02 18:01:03 crc kubenswrapper[4858]: I0202 18:01:03.781348 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500921-5cpwt" 
event={"ID":"aedf0b15-0748-4ad7-afce-e421d046a585","Type":"ContainerDied","Data":"931c4a1f853eafb58f714b93b9e77da8a56d1b2b088b7a37b34a0a8231c3841c"} Feb 02 18:01:05 crc kubenswrapper[4858]: I0202 18:01:05.254467 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500921-5cpwt" Feb 02 18:01:05 crc kubenswrapper[4858]: I0202 18:01:05.427643 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aedf0b15-0748-4ad7-afce-e421d046a585-fernet-keys\") pod \"aedf0b15-0748-4ad7-afce-e421d046a585\" (UID: \"aedf0b15-0748-4ad7-afce-e421d046a585\") " Feb 02 18:01:05 crc kubenswrapper[4858]: I0202 18:01:05.428490 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aedf0b15-0748-4ad7-afce-e421d046a585-config-data\") pod \"aedf0b15-0748-4ad7-afce-e421d046a585\" (UID: \"aedf0b15-0748-4ad7-afce-e421d046a585\") " Feb 02 18:01:05 crc kubenswrapper[4858]: I0202 18:01:05.428565 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9gl7\" (UniqueName: \"kubernetes.io/projected/aedf0b15-0748-4ad7-afce-e421d046a585-kube-api-access-n9gl7\") pod \"aedf0b15-0748-4ad7-afce-e421d046a585\" (UID: \"aedf0b15-0748-4ad7-afce-e421d046a585\") " Feb 02 18:01:05 crc kubenswrapper[4858]: I0202 18:01:05.428875 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aedf0b15-0748-4ad7-afce-e421d046a585-combined-ca-bundle\") pod \"aedf0b15-0748-4ad7-afce-e421d046a585\" (UID: \"aedf0b15-0748-4ad7-afce-e421d046a585\") " Feb 02 18:01:05 crc kubenswrapper[4858]: I0202 18:01:05.436463 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aedf0b15-0748-4ad7-afce-e421d046a585-kube-api-access-n9gl7" (OuterVolumeSpecName: "kube-api-access-n9gl7") pod "aedf0b15-0748-4ad7-afce-e421d046a585" (UID: "aedf0b15-0748-4ad7-afce-e421d046a585"). InnerVolumeSpecName "kube-api-access-n9gl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:01:05 crc kubenswrapper[4858]: I0202 18:01:05.438411 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aedf0b15-0748-4ad7-afce-e421d046a585-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "aedf0b15-0748-4ad7-afce-e421d046a585" (UID: "aedf0b15-0748-4ad7-afce-e421d046a585"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 18:01:05 crc kubenswrapper[4858]: I0202 18:01:05.465896 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aedf0b15-0748-4ad7-afce-e421d046a585-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aedf0b15-0748-4ad7-afce-e421d046a585" (UID: "aedf0b15-0748-4ad7-afce-e421d046a585"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 18:01:05 crc kubenswrapper[4858]: I0202 18:01:05.488222 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aedf0b15-0748-4ad7-afce-e421d046a585-config-data" (OuterVolumeSpecName: "config-data") pod "aedf0b15-0748-4ad7-afce-e421d046a585" (UID: "aedf0b15-0748-4ad7-afce-e421d046a585"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 18:01:05 crc kubenswrapper[4858]: I0202 18:01:05.532137 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aedf0b15-0748-4ad7-afce-e421d046a585-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 18:01:05 crc kubenswrapper[4858]: I0202 18:01:05.532176 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9gl7\" (UniqueName: \"kubernetes.io/projected/aedf0b15-0748-4ad7-afce-e421d046a585-kube-api-access-n9gl7\") on node \"crc\" DevicePath \"\"" Feb 02 18:01:05 crc kubenswrapper[4858]: I0202 18:01:05.532190 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aedf0b15-0748-4ad7-afce-e421d046a585-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 18:01:05 crc kubenswrapper[4858]: I0202 18:01:05.532201 4858 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aedf0b15-0748-4ad7-afce-e421d046a585-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 18:01:05 crc kubenswrapper[4858]: I0202 18:01:05.806174 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29500921-5cpwt" Feb 02 18:01:05 crc kubenswrapper[4858]: I0202 18:01:05.806147 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29500921-5cpwt" event={"ID":"aedf0b15-0748-4ad7-afce-e421d046a585","Type":"ContainerDied","Data":"7f1bf2c59f36fa20be4b58f74fe73bc49194e7fb0525abbfb06efa4b5f0026c4"} Feb 02 18:01:05 crc kubenswrapper[4858]: I0202 18:01:05.806618 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f1bf2c59f36fa20be4b58f74fe73bc49194e7fb0525abbfb06efa4b5f0026c4" Feb 02 18:02:27 crc kubenswrapper[4858]: I0202 18:02:27.808019 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 18:02:27 crc kubenswrapper[4858]: I0202 18:02:27.808709 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 18:02:57 crc kubenswrapper[4858]: I0202 18:02:57.807309 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 18:02:57 crc kubenswrapper[4858]: I0202 18:02:57.807884 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 18:03:11 crc kubenswrapper[4858]: I0202 18:03:11.112005 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bpksj"] Feb 02 18:03:11 crc kubenswrapper[4858]: 
E0202 18:03:11.115365 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aedf0b15-0748-4ad7-afce-e421d046a585" containerName="keystone-cron" Feb 02 18:03:11 crc kubenswrapper[4858]: I0202 18:03:11.115400 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="aedf0b15-0748-4ad7-afce-e421d046a585" containerName="keystone-cron" Feb 02 18:03:11 crc kubenswrapper[4858]: I0202 18:03:11.115695 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="aedf0b15-0748-4ad7-afce-e421d046a585" containerName="keystone-cron" Feb 02 18:03:11 crc kubenswrapper[4858]: I0202 18:03:11.117844 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bpksj" Feb 02 18:03:11 crc kubenswrapper[4858]: I0202 18:03:11.125281 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bpksj"] Feb 02 18:03:11 crc kubenswrapper[4858]: I0202 18:03:11.184260 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f79255e-e635-4fbb-9142-e3a1a0e9373f-utilities\") pod \"community-operators-bpksj\" (UID: \"1f79255e-e635-4fbb-9142-e3a1a0e9373f\") " pod="openshift-marketplace/community-operators-bpksj" Feb 02 18:03:11 crc kubenswrapper[4858]: I0202 18:03:11.184481 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l5c4\" (UniqueName: \"kubernetes.io/projected/1f79255e-e635-4fbb-9142-e3a1a0e9373f-kube-api-access-7l5c4\") pod \"community-operators-bpksj\" (UID: \"1f79255e-e635-4fbb-9142-e3a1a0e9373f\") " pod="openshift-marketplace/community-operators-bpksj" Feb 02 18:03:11 crc kubenswrapper[4858]: I0202 18:03:11.185114 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f79255e-e635-4fbb-9142-e3a1a0e9373f-catalog-content\") pod \"community-operators-bpksj\" (UID: \"1f79255e-e635-4fbb-9142-e3a1a0e9373f\") " pod="openshift-marketplace/community-operators-bpksj" Feb 02 18:03:11 crc kubenswrapper[4858]: I0202 18:03:11.287164 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f79255e-e635-4fbb-9142-e3a1a0e9373f-catalog-content\") pod \"community-operators-bpksj\" (UID: \"1f79255e-e635-4fbb-9142-e3a1a0e9373f\") " pod="openshift-marketplace/community-operators-bpksj" Feb 02 18:03:11 crc kubenswrapper[4858]: I0202 18:03:11.287293 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f79255e-e635-4fbb-9142-e3a1a0e9373f-utilities\") pod \"community-operators-bpksj\" (UID: \"1f79255e-e635-4fbb-9142-e3a1a0e9373f\") " pod="openshift-marketplace/community-operators-bpksj" Feb 02 18:03:11 crc kubenswrapper[4858]: I0202 18:03:11.287338 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l5c4\" (UniqueName: \"kubernetes.io/projected/1f79255e-e635-4fbb-9142-e3a1a0e9373f-kube-api-access-7l5c4\") pod \"community-operators-bpksj\" (UID: \"1f79255e-e635-4fbb-9142-e3a1a0e9373f\") " pod="openshift-marketplace/community-operators-bpksj" Feb 02 18:03:11 crc kubenswrapper[4858]: I0202 18:03:11.287689 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1f79255e-e635-4fbb-9142-e3a1a0e9373f-catalog-content\") pod \"community-operators-bpksj\" (UID: \"1f79255e-e635-4fbb-9142-e3a1a0e9373f\") " pod="openshift-marketplace/community-operators-bpksj" Feb 02 18:03:11 crc kubenswrapper[4858]: I0202 18:03:11.288885 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f79255e-e635-4fbb-9142-e3a1a0e9373f-utilities\") pod \"community-operators-bpksj\" (UID: \"1f79255e-e635-4fbb-9142-e3a1a0e9373f\") " pod="openshift-marketplace/community-operators-bpksj" Feb 02 18:03:11 crc kubenswrapper[4858]: I0202 18:03:11.307734 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l5c4\" (UniqueName: \"kubernetes.io/projected/1f79255e-e635-4fbb-9142-e3a1a0e9373f-kube-api-access-7l5c4\") pod \"community-operators-bpksj\" (UID: \"1f79255e-e635-4fbb-9142-e3a1a0e9373f\") " pod="openshift-marketplace/community-operators-bpksj" Feb 02 18:03:11 crc kubenswrapper[4858]: I0202 18:03:11.462382 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bpksj" Feb 02 18:03:11 crc kubenswrapper[4858]: I0202 18:03:11.949343 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bpksj"] Feb 02 18:03:12 crc kubenswrapper[4858]: I0202 18:03:12.599670 4858 generic.go:334] "Generic (PLEG): container finished" podID="1f79255e-e635-4fbb-9142-e3a1a0e9373f" containerID="0617f23644b43909cfb5952e21fca2b82244b5a8fea927689b6f0c1e79ba4720" exitCode=0 Feb 02 18:03:12 crc kubenswrapper[4858]: I0202 18:03:12.599739 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpksj" event={"ID":"1f79255e-e635-4fbb-9142-e3a1a0e9373f","Type":"ContainerDied","Data":"0617f23644b43909cfb5952e21fca2b82244b5a8fea927689b6f0c1e79ba4720"} Feb 02 18:03:12 crc kubenswrapper[4858]: I0202 18:03:12.600015 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpksj" event={"ID":"1f79255e-e635-4fbb-9142-e3a1a0e9373f","Type":"ContainerStarted","Data":"f389c34786b06c5dc15df2face2029a806b5f0a5e3edd4bd5f58c9129ac3e0e1"} Feb 02 18:03:14 crc kubenswrapper[4858]: I0202 18:03:14.627780 4858 generic.go:334] "Generic (PLEG): container finished" podID="1f79255e-e635-4fbb-9142-e3a1a0e9373f" containerID="b2ea54b5ecdc37c54a6ed7da02c2e8b41c2695d2b784884fc6b8c85c4ea3468e" exitCode=0 Feb 02 18:03:14 crc kubenswrapper[4858]: I0202 18:03:14.627880 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpksj" event={"ID":"1f79255e-e635-4fbb-9142-e3a1a0e9373f","Type":"ContainerDied","Data":"b2ea54b5ecdc37c54a6ed7da02c2e8b41c2695d2b784884fc6b8c85c4ea3468e"} Feb 02 18:03:15 crc kubenswrapper[4858]: I0202 18:03:15.642392 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpksj" event={"ID":"1f79255e-e635-4fbb-9142-e3a1a0e9373f","Type":"ContainerStarted","Data":"fb71549b53fced8f0b72612929577607037622c5ee4fabbf7efa1b5634b1e743"} Feb 02 18:03:15 crc kubenswrapper[4858]: I0202 18:03:15.687142 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bpksj" podStartSLOduration=2.219434195 podStartE2EDuration="4.68693216s" podCreationTimestamp="2026-02-02 18:03:11 +0000 UTC" firstStartedPulling="2026-02-02 18:03:12.601840711 +0000 UTC m=+2893.754255986" 
lastFinishedPulling="2026-02-02 18:03:15.069338676 +0000 UTC m=+2896.221753951" observedRunningTime="2026-02-02 18:03:15.67326285 +0000 UTC m=+2896.825678115" watchObservedRunningTime="2026-02-02 18:03:15.68693216 +0000 UTC m=+2896.839347425" Feb 02 18:03:16 crc kubenswrapper[4858]: I0202 18:03:16.286213 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gp9n9"] Feb 02 18:03:16 crc kubenswrapper[4858]: I0202 18:03:16.288844 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gp9n9" Feb 02 18:03:16 crc kubenswrapper[4858]: I0202 18:03:16.300457 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gp9n9"] Feb 02 18:03:16 crc kubenswrapper[4858]: I0202 18:03:16.491743 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qjwp\" (UniqueName: \"kubernetes.io/projected/ddd92075-1f43-4dda-9adf-e07ffb1882ae-kube-api-access-5qjwp\") pod \"certified-operators-gp9n9\" (UID: \"ddd92075-1f43-4dda-9adf-e07ffb1882ae\") " pod="openshift-marketplace/certified-operators-gp9n9" Feb 02 18:03:16 crc kubenswrapper[4858]: I0202 18:03:16.491809 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd92075-1f43-4dda-9adf-e07ffb1882ae-utilities\") pod \"certified-operators-gp9n9\" (UID: \"ddd92075-1f43-4dda-9adf-e07ffb1882ae\") " pod="openshift-marketplace/certified-operators-gp9n9" Feb 02 18:03:16 crc kubenswrapper[4858]: I0202 18:03:16.493172 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd92075-1f43-4dda-9adf-e07ffb1882ae-catalog-content\") pod \"certified-operators-gp9n9\" (UID: \"ddd92075-1f43-4dda-9adf-e07ffb1882ae\") " pod="openshift-marketplace/certified-operators-gp9n9" Feb 02 18:03:16 crc kubenswrapper[4858]: I0202 18:03:16.594734 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qjwp\" (UniqueName: \"kubernetes.io/projected/ddd92075-1f43-4dda-9adf-e07ffb1882ae-kube-api-access-5qjwp\") pod \"certified-operators-gp9n9\" (UID: \"ddd92075-1f43-4dda-9adf-e07ffb1882ae\") " pod="openshift-marketplace/certified-operators-gp9n9" Feb 02 18:03:16 crc kubenswrapper[4858]: I0202 18:03:16.594788 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd92075-1f43-4dda-9adf-e07ffb1882ae-utilities\") pod \"certified-operators-gp9n9\" (UID: \"ddd92075-1f43-4dda-9adf-e07ffb1882ae\") " pod="openshift-marketplace/certified-operators-gp9n9" Feb 02 18:03:16 crc kubenswrapper[4858]: I0202 18:03:16.594950 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd92075-1f43-4dda-9adf-e07ffb1882ae-catalog-content\") pod \"certified-operators-gp9n9\" (UID: \"ddd92075-1f43-4dda-9adf-e07ffb1882ae\") " pod="openshift-marketplace/certified-operators-gp9n9" Feb 02 18:03:16 crc kubenswrapper[4858]: I0202 18:03:16.595416 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd92075-1f43-4dda-9adf-e07ffb1882ae-utilities\") pod \"certified-operators-gp9n9\" (UID: \"ddd92075-1f43-4dda-9adf-e07ffb1882ae\") " 
pod="openshift-marketplace/certified-operators-gp9n9" Feb 02 18:03:16 crc kubenswrapper[4858]: I0202 18:03:16.595448 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd92075-1f43-4dda-9adf-e07ffb1882ae-catalog-content\") pod \"certified-operators-gp9n9\" (UID: \"ddd92075-1f43-4dda-9adf-e07ffb1882ae\") " pod="openshift-marketplace/certified-operators-gp9n9" Feb 02 18:03:16 crc kubenswrapper[4858]: I0202 18:03:16.614999 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qjwp\" (UniqueName: \"kubernetes.io/projected/ddd92075-1f43-4dda-9adf-e07ffb1882ae-kube-api-access-5qjwp\") pod \"certified-operators-gp9n9\" (UID: \"ddd92075-1f43-4dda-9adf-e07ffb1882ae\") " pod="openshift-marketplace/certified-operators-gp9n9" Feb 02 18:03:16 crc kubenswrapper[4858]: I0202 18:03:16.908206 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gp9n9" Feb 02 18:03:17 crc kubenswrapper[4858]: I0202 18:03:17.671893 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gp9n9"] Feb 02 18:03:17 crc kubenswrapper[4858]: W0202 18:03:17.683670 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddd92075_1f43_4dda_9adf_e07ffb1882ae.slice/crio-225117b6283ed50493ce2bd0a4eaf05000f1e85d5f6e77e8af5a81e98efed2ce WatchSource:0}: Error finding container 225117b6283ed50493ce2bd0a4eaf05000f1e85d5f6e77e8af5a81e98efed2ce: Status 404 returned error can't find the container with id 225117b6283ed50493ce2bd0a4eaf05000f1e85d5f6e77e8af5a81e98efed2ce Feb 02 18:03:18 crc kubenswrapper[4858]: I0202 18:03:18.673789 4858 generic.go:334] "Generic (PLEG): container finished" podID="ddd92075-1f43-4dda-9adf-e07ffb1882ae" containerID="069568c4b3dd8f73aa1d5ec34a6a21642229e0b7bed079fa469a3272b5883c22" exitCode=0 Feb 02 18:03:18 crc kubenswrapper[4858]: I0202 18:03:18.673849 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp9n9" event={"ID":"ddd92075-1f43-4dda-9adf-e07ffb1882ae","Type":"ContainerDied","Data":"069568c4b3dd8f73aa1d5ec34a6a21642229e0b7bed079fa469a3272b5883c22"} Feb 02 18:03:18 crc kubenswrapper[4858]: I0202 18:03:18.674167 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp9n9" event={"ID":"ddd92075-1f43-4dda-9adf-e07ffb1882ae","Type":"ContainerStarted","Data":"225117b6283ed50493ce2bd0a4eaf05000f1e85d5f6e77e8af5a81e98efed2ce"} Feb 02 18:03:19 crc kubenswrapper[4858]: I0202 18:03:19.690009 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp9n9" event={"ID":"ddd92075-1f43-4dda-9adf-e07ffb1882ae","Type":"ContainerStarted","Data":"a06f84db8e295be250165b1f6f0135eda62ccfe08d676130b64f64ad20fcd915"} Feb 02 18:03:20 crc kubenswrapper[4858]: I0202 18:03:20.487632 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2fsmx"] Feb 02 18:03:20 crc kubenswrapper[4858]: I0202 18:03:20.491606 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2fsmx" Feb 02 18:03:20 crc kubenswrapper[4858]: I0202 18:03:20.506763 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2fsmx"] Feb 02 18:03:20 crc kubenswrapper[4858]: I0202 18:03:20.612461 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fddffd1c-49ff-408e-98bf-211f38c9004a-catalog-content\") pod \"redhat-operators-2fsmx\" (UID: \"fddffd1c-49ff-408e-98bf-211f38c9004a\") " pod="openshift-marketplace/redhat-operators-2fsmx" Feb 02 18:03:20 crc kubenswrapper[4858]: I0202 18:03:20.612589 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fddffd1c-49ff-408e-98bf-211f38c9004a-utilities\") pod \"redhat-operators-2fsmx\" (UID: \"fddffd1c-49ff-408e-98bf-211f38c9004a\") " pod="openshift-marketplace/redhat-operators-2fsmx" Feb 02 18:03:20 crc kubenswrapper[4858]: I0202 18:03:20.612732 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltt8m\" (UniqueName: \"kubernetes.io/projected/fddffd1c-49ff-408e-98bf-211f38c9004a-kube-api-access-ltt8m\") pod \"redhat-operators-2fsmx\" (UID: \"fddffd1c-49ff-408e-98bf-211f38c9004a\") " pod="openshift-marketplace/redhat-operators-2fsmx" Feb 02 18:03:20 crc kubenswrapper[4858]: I0202 18:03:20.701150 4858 generic.go:334] "Generic (PLEG): container finished" podID="ddd92075-1f43-4dda-9adf-e07ffb1882ae" containerID="a06f84db8e295be250165b1f6f0135eda62ccfe08d676130b64f64ad20fcd915" exitCode=0 Feb 02 18:03:20 crc kubenswrapper[4858]: I0202 18:03:20.701200 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp9n9" event={"ID":"ddd92075-1f43-4dda-9adf-e07ffb1882ae","Type":"ContainerDied","Data":"a06f84db8e295be250165b1f6f0135eda62ccfe08d676130b64f64ad20fcd915"} Feb 02 18:03:20 crc kubenswrapper[4858]: I0202 18:03:20.714645 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fddffd1c-49ff-408e-98bf-211f38c9004a-catalog-content\") pod \"redhat-operators-2fsmx\" (UID: \"fddffd1c-49ff-408e-98bf-211f38c9004a\") " pod="openshift-marketplace/redhat-operators-2fsmx" Feb 02 18:03:20 crc kubenswrapper[4858]: I0202 18:03:20.715119 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fddffd1c-49ff-408e-98bf-211f38c9004a-utilities\") pod \"redhat-operators-2fsmx\" (UID: \"fddffd1c-49ff-408e-98bf-211f38c9004a\") " pod="openshift-marketplace/redhat-operators-2fsmx" Feb 02 18:03:20 crc kubenswrapper[4858]: I0202 18:03:20.715175 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltt8m\" (UniqueName: \"kubernetes.io/projected/fddffd1c-49ff-408e-98bf-211f38c9004a-kube-api-access-ltt8m\") pod \"redhat-operators-2fsmx\" (UID: \"fddffd1c-49ff-408e-98bf-211f38c9004a\") " pod="openshift-marketplace/redhat-operators-2fsmx" Feb 02 18:03:20 crc kubenswrapper[4858]: I0202 18:03:20.715218 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fddffd1c-49ff-408e-98bf-211f38c9004a-catalog-content\") pod \"redhat-operators-2fsmx\" (UID: 
\"fddffd1c-49ff-408e-98bf-211f38c9004a\") " pod="openshift-marketplace/redhat-operators-2fsmx" Feb 02 18:03:20 crc kubenswrapper[4858]: I0202 18:03:20.715626 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fddffd1c-49ff-408e-98bf-211f38c9004a-utilities\") pod \"redhat-operators-2fsmx\" (UID: \"fddffd1c-49ff-408e-98bf-211f38c9004a\") " pod="openshift-marketplace/redhat-operators-2fsmx" Feb 02 18:03:20 crc kubenswrapper[4858]: I0202 18:03:20.755716 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltt8m\" (UniqueName: \"kubernetes.io/projected/fddffd1c-49ff-408e-98bf-211f38c9004a-kube-api-access-ltt8m\") pod \"redhat-operators-2fsmx\" (UID: \"fddffd1c-49ff-408e-98bf-211f38c9004a\") " pod="openshift-marketplace/redhat-operators-2fsmx" Feb 02 18:03:20 crc kubenswrapper[4858]: I0202 18:03:20.808984 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2fsmx" Feb 02 18:03:21 crc kubenswrapper[4858]: I0202 18:03:21.278005 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2fsmx"] Feb 02 18:03:21 crc kubenswrapper[4858]: W0202 18:03:21.291921 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfddffd1c_49ff_408e_98bf_211f38c9004a.slice/crio-7a653391805df3a1d2b453b826743fde410210306c2b556a114ea7e0cc39f221 WatchSource:0}: Error finding container 7a653391805df3a1d2b453b826743fde410210306c2b556a114ea7e0cc39f221: Status 404 returned error can't find the container with id 7a653391805df3a1d2b453b826743fde410210306c2b556a114ea7e0cc39f221 Feb 02 18:03:21 crc kubenswrapper[4858]: I0202 18:03:21.463292 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bpksj" Feb 02 18:03:21 crc kubenswrapper[4858]: I0202 18:03:21.463348 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bpksj" Feb 02 18:03:21 crc kubenswrapper[4858]: I0202 18:03:21.526998 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bpksj" Feb 02 18:03:21 crc kubenswrapper[4858]: I0202 18:03:21.718191 4858 generic.go:334] "Generic (PLEG): container finished" podID="fddffd1c-49ff-408e-98bf-211f38c9004a" containerID="c3e410d5956455a6a48c7fd28256d1d07f6232f72f6c8a3474af2f4e88e7e2bd" exitCode=0 Feb 02 18:03:21 crc kubenswrapper[4858]: I0202 18:03:21.718281 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2fsmx" event={"ID":"fddffd1c-49ff-408e-98bf-211f38c9004a","Type":"ContainerDied","Data":"c3e410d5956455a6a48c7fd28256d1d07f6232f72f6c8a3474af2f4e88e7e2bd"} Feb 02 18:03:21 crc kubenswrapper[4858]: I0202 18:03:21.718733 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2fsmx" event={"ID":"fddffd1c-49ff-408e-98bf-211f38c9004a","Type":"ContainerStarted","Data":"7a653391805df3a1d2b453b826743fde410210306c2b556a114ea7e0cc39f221"} Feb 02 18:03:21 crc kubenswrapper[4858]: I0202 18:03:21.803248 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bpksj" Feb 02 18:03:22 crc kubenswrapper[4858]: I0202 18:03:22.731866 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-gp9n9" event={"ID":"ddd92075-1f43-4dda-9adf-e07ffb1882ae","Type":"ContainerStarted","Data":"e24c65afc25089314e0ba30d08246da3177c0c9879327f807a7ad712208fd1e2"} Feb 02 18:03:22 crc kubenswrapper[4858]: I0202 18:03:22.734146 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2fsmx" event={"ID":"fddffd1c-49ff-408e-98bf-211f38c9004a","Type":"ContainerStarted","Data":"93528e2205a4f8a340129f70f5f8428c25e34ae62938d6f4aaaca0ef0382826f"} Feb 02 18:03:22 crc kubenswrapper[4858]: I0202 18:03:22.755956 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gp9n9" podStartSLOduration=3.799584336 podStartE2EDuration="6.755934364s" podCreationTimestamp="2026-02-02 18:03:16 +0000 UTC" firstStartedPulling="2026-02-02 18:03:18.67701536 +0000 UTC m=+2899.829430625" lastFinishedPulling="2026-02-02 18:03:21.633365388 +0000 UTC m=+2902.785780653" observedRunningTime="2026-02-02 18:03:22.751000263 +0000 UTC m=+2903.903415538" watchObservedRunningTime="2026-02-02 18:03:22.755934364 +0000 UTC m=+2903.908349629" Feb 02 18:03:23 crc kubenswrapper[4858]: I0202 18:03:23.688779 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bpksj"] Feb 02 18:03:23 crc kubenswrapper[4858]: I0202 18:03:23.746778 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bpksj" podUID="1f79255e-e635-4fbb-9142-e3a1a0e9373f" containerName="registry-server" containerID="cri-o://fb71549b53fced8f0b72612929577607037622c5ee4fabbf7efa1b5634b1e743" gracePeriod=2 Feb 02 18:03:25 crc kubenswrapper[4858]: I0202 18:03:25.767663 4858 generic.go:334] "Generic (PLEG): container finished" podID="fddffd1c-49ff-408e-98bf-211f38c9004a" containerID="93528e2205a4f8a340129f70f5f8428c25e34ae62938d6f4aaaca0ef0382826f" exitCode=0 Feb 02 18:03:25 crc kubenswrapper[4858]: I0202 18:03:25.767759 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2fsmx" event={"ID":"fddffd1c-49ff-408e-98bf-211f38c9004a","Type":"ContainerDied","Data":"93528e2205a4f8a340129f70f5f8428c25e34ae62938d6f4aaaca0ef0382826f"} Feb 02 18:03:26 crc kubenswrapper[4858]: I0202 18:03:26.782300 4858 generic.go:334] "Generic (PLEG): container finished" podID="1f79255e-e635-4fbb-9142-e3a1a0e9373f" containerID="fb71549b53fced8f0b72612929577607037622c5ee4fabbf7efa1b5634b1e743" exitCode=0 Feb 02 18:03:26 crc kubenswrapper[4858]: I0202 18:03:26.782360 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpksj" event={"ID":"1f79255e-e635-4fbb-9142-e3a1a0e9373f","Type":"ContainerDied","Data":"fb71549b53fced8f0b72612929577607037622c5ee4fabbf7efa1b5634b1e743"} Feb 02 18:03:26 crc kubenswrapper[4858]: I0202 18:03:26.924445 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gp9n9" Feb 02 18:03:26 crc kubenswrapper[4858]: I0202 18:03:26.925365 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gp9n9" Feb 02 18:03:26 crc kubenswrapper[4858]: I0202 18:03:26.983045 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gp9n9" Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.190756 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bpksj" Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.371188 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f79255e-e635-4fbb-9142-e3a1a0e9373f-utilities\") pod \"1f79255e-e635-4fbb-9142-e3a1a0e9373f\" (UID: \"1f79255e-e635-4fbb-9142-e3a1a0e9373f\") " Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.371419 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l5c4\" (UniqueName: \"kubernetes.io/projected/1f79255e-e635-4fbb-9142-e3a1a0e9373f-kube-api-access-7l5c4\") pod \"1f79255e-e635-4fbb-9142-e3a1a0e9373f\" (UID: \"1f79255e-e635-4fbb-9142-e3a1a0e9373f\") " Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.371499 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f79255e-e635-4fbb-9142-e3a1a0e9373f-catalog-content\") pod \"1f79255e-e635-4fbb-9142-e3a1a0e9373f\" (UID: \"1f79255e-e635-4fbb-9142-e3a1a0e9373f\") " Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.371783 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f79255e-e635-4fbb-9142-e3a1a0e9373f-utilities" (OuterVolumeSpecName: "utilities") pod "1f79255e-e635-4fbb-9142-e3a1a0e9373f" (UID: "1f79255e-e635-4fbb-9142-e3a1a0e9373f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.372421 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f79255e-e635-4fbb-9142-e3a1a0e9373f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.387687 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f79255e-e635-4fbb-9142-e3a1a0e9373f-kube-api-access-7l5c4" (OuterVolumeSpecName: "kube-api-access-7l5c4") pod "1f79255e-e635-4fbb-9142-e3a1a0e9373f" (UID: "1f79255e-e635-4fbb-9142-e3a1a0e9373f"). InnerVolumeSpecName "kube-api-access-7l5c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.418130 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f79255e-e635-4fbb-9142-e3a1a0e9373f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f79255e-e635-4fbb-9142-e3a1a0e9373f" (UID: "1f79255e-e635-4fbb-9142-e3a1a0e9373f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.474796 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f79255e-e635-4fbb-9142-e3a1a0e9373f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.474840 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l5c4\" (UniqueName: \"kubernetes.io/projected/1f79255e-e635-4fbb-9142-e3a1a0e9373f-kube-api-access-7l5c4\") on node \"crc\" DevicePath \"\"" Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.796905 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bpksj" Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.796895 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpksj" event={"ID":"1f79255e-e635-4fbb-9142-e3a1a0e9373f","Type":"ContainerDied","Data":"f389c34786b06c5dc15df2face2029a806b5f0a5e3edd4bd5f58c9129ac3e0e1"} Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.797099 4858 scope.go:117] "RemoveContainer" containerID="fb71549b53fced8f0b72612929577607037622c5ee4fabbf7efa1b5634b1e743" Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.801350 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2fsmx" event={"ID":"fddffd1c-49ff-408e-98bf-211f38c9004a","Type":"ContainerStarted","Data":"4c1e092c741747274da2a574202a57f0cdb980bbba5a2d3fb62cc0006f4571a5"} Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.807176 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.807512 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.807558 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.808203 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c138fbcf7f05ffa20c7f25a2579c78076541e9f160625248f5a538afb5d97df4"} pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.808258 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" containerID="cri-o://c138fbcf7f05ffa20c7f25a2579c78076541e9f160625248f5a538afb5d97df4" gracePeriod=600 Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.841616 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2fsmx" podStartSLOduration=2.714172744 podStartE2EDuration="7.841595662s" podCreationTimestamp="2026-02-02 18:03:20 +0000 UTC" firstStartedPulling="2026-02-02 18:03:21.720637627 +0000 UTC m=+2902.873052892" lastFinishedPulling="2026-02-02 18:03:26.848060545 +0000 UTC m=+2908.000475810" observedRunningTime="2026-02-02 18:03:27.830857346 +0000 UTC m=+2908.983272611" watchObservedRunningTime="2026-02-02 18:03:27.841595662 +0000 UTC m=+2908.994010927" Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.855572 4858 scope.go:117] "RemoveContainer" containerID="b2ea54b5ecdc37c54a6ed7da02c2e8b41c2695d2b784884fc6b8c85c4ea3468e" Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.858735 
4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bpksj"] Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.867574 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bpksj"] Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.879268 4858 scope.go:117] "RemoveContainer" containerID="0617f23644b43909cfb5952e21fca2b82244b5a8fea927689b6f0c1e79ba4720" Feb 02 18:03:27 crc kubenswrapper[4858]: I0202 18:03:27.901703 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gp9n9" Feb 02 18:03:28 crc kubenswrapper[4858]: I0202 18:03:28.415165 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f79255e-e635-4fbb-9142-e3a1a0e9373f" path="/var/lib/kubelet/pods/1f79255e-e635-4fbb-9142-e3a1a0e9373f/volumes" Feb 02 18:03:28 crc kubenswrapper[4858]: I0202 18:03:28.822057 4858 generic.go:334] "Generic (PLEG): container finished" podID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerID="c138fbcf7f05ffa20c7f25a2579c78076541e9f160625248f5a538afb5d97df4" exitCode=0 Feb 02 18:03:28 crc kubenswrapper[4858]: I0202 18:03:28.822152 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerDied","Data":"c138fbcf7f05ffa20c7f25a2579c78076541e9f160625248f5a538afb5d97df4"} Feb 02 18:03:28 crc kubenswrapper[4858]: I0202 18:03:28.822696 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerStarted","Data":"a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195"} Feb 02 18:03:28 crc kubenswrapper[4858]: I0202 18:03:28.822772 4858 scope.go:117] "RemoveContainer" containerID="14c0595ecfc392eae362e391092ca630b3ab65f45c68441d2d3c09ae407972df" Feb 02 18:03:30 crc kubenswrapper[4858]: I0202 18:03:30.481682 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gp9n9"] Feb 02 18:03:30 crc kubenswrapper[4858]: I0202 18:03:30.809588 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2fsmx" Feb 02 18:03:30 crc kubenswrapper[4858]: I0202 18:03:30.809869 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2fsmx" Feb 02 18:03:30 crc kubenswrapper[4858]: I0202 18:03:30.856647 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gp9n9" podUID="ddd92075-1f43-4dda-9adf-e07ffb1882ae" containerName="registry-server" containerID="cri-o://e24c65afc25089314e0ba30d08246da3177c0c9879327f807a7ad712208fd1e2" gracePeriod=2 Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.352654 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gp9n9" Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.457577 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qjwp\" (UniqueName: \"kubernetes.io/projected/ddd92075-1f43-4dda-9adf-e07ffb1882ae-kube-api-access-5qjwp\") pod \"ddd92075-1f43-4dda-9adf-e07ffb1882ae\" (UID: \"ddd92075-1f43-4dda-9adf-e07ffb1882ae\") " Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.457855 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd92075-1f43-4dda-9adf-e07ffb1882ae-utilities\") pod \"ddd92075-1f43-4dda-9adf-e07ffb1882ae\" (UID: \"ddd92075-1f43-4dda-9adf-e07ffb1882ae\") " Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.457911 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd92075-1f43-4dda-9adf-e07ffb1882ae-catalog-content\") pod \"ddd92075-1f43-4dda-9adf-e07ffb1882ae\" (UID: \"ddd92075-1f43-4dda-9adf-e07ffb1882ae\") " Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.460246 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddd92075-1f43-4dda-9adf-e07ffb1882ae-utilities" (OuterVolumeSpecName: "utilities") pod "ddd92075-1f43-4dda-9adf-e07ffb1882ae" (UID: "ddd92075-1f43-4dda-9adf-e07ffb1882ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.472193 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddd92075-1f43-4dda-9adf-e07ffb1882ae-kube-api-access-5qjwp" (OuterVolumeSpecName: "kube-api-access-5qjwp") pod "ddd92075-1f43-4dda-9adf-e07ffb1882ae" (UID: "ddd92075-1f43-4dda-9adf-e07ffb1882ae"). InnerVolumeSpecName "kube-api-access-5qjwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.511570 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddd92075-1f43-4dda-9adf-e07ffb1882ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddd92075-1f43-4dda-9adf-e07ffb1882ae" (UID: "ddd92075-1f43-4dda-9adf-e07ffb1882ae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.561909 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddd92075-1f43-4dda-9adf-e07ffb1882ae-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.562005 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddd92075-1f43-4dda-9adf-e07ffb1882ae-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.562026 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qjwp\" (UniqueName: \"kubernetes.io/projected/ddd92075-1f43-4dda-9adf-e07ffb1882ae-kube-api-access-5qjwp\") on node \"crc\" DevicePath \"\"" Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.868770 4858 generic.go:334] "Generic (PLEG): container finished" podID="ddd92075-1f43-4dda-9adf-e07ffb1882ae" containerID="e24c65afc25089314e0ba30d08246da3177c0c9879327f807a7ad712208fd1e2" exitCode=0 Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.868820 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp9n9" event={"ID":"ddd92075-1f43-4dda-9adf-e07ffb1882ae","Type":"ContainerDied","Data":"e24c65afc25089314e0ba30d08246da3177c0c9879327f807a7ad712208fd1e2"} Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.868859 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gp9n9" event={"ID":"ddd92075-1f43-4dda-9adf-e07ffb1882ae","Type":"ContainerDied","Data":"225117b6283ed50493ce2bd0a4eaf05000f1e85d5f6e77e8af5a81e98efed2ce"} Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.868883 4858 scope.go:117] "RemoveContainer" containerID="e24c65afc25089314e0ba30d08246da3177c0c9879327f807a7ad712208fd1e2" Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.868952 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gp9n9" Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.882159 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2fsmx" podUID="fddffd1c-49ff-408e-98bf-211f38c9004a" containerName="registry-server" probeResult="failure" output=< Feb 02 18:03:31 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Feb 02 18:03:31 crc kubenswrapper[4858]: > Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.897406 4858 scope.go:117] "RemoveContainer" containerID="a06f84db8e295be250165b1f6f0135eda62ccfe08d676130b64f64ad20fcd915" Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.907584 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gp9n9"] Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.927125 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gp9n9"] Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.927687 4858 scope.go:117] "RemoveContainer" containerID="069568c4b3dd8f73aa1d5ec34a6a21642229e0b7bed079fa469a3272b5883c22" Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.989616 4858 scope.go:117] "RemoveContainer" containerID="e24c65afc25089314e0ba30d08246da3177c0c9879327f807a7ad712208fd1e2" Feb 02 18:03:31 crc kubenswrapper[4858]: E0202 18:03:31.992477 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e24c65afc25089314e0ba30d08246da3177c0c9879327f807a7ad712208fd1e2\": container with ID starting with e24c65afc25089314e0ba30d08246da3177c0c9879327f807a7ad712208fd1e2 not found: ID does not exist" containerID="e24c65afc25089314e0ba30d08246da3177c0c9879327f807a7ad712208fd1e2" Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.992560 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e24c65afc25089314e0ba30d08246da3177c0c9879327f807a7ad712208fd1e2"} err="failed to get container status \"e24c65afc25089314e0ba30d08246da3177c0c9879327f807a7ad712208fd1e2\": rpc error: code = NotFound desc = could not find container \"e24c65afc25089314e0ba30d08246da3177c0c9879327f807a7ad712208fd1e2\": container with ID starting with e24c65afc25089314e0ba30d08246da3177c0c9879327f807a7ad712208fd1e2 not found: ID does not exist" Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.992607 4858 scope.go:117] "RemoveContainer" containerID="a06f84db8e295be250165b1f6f0135eda62ccfe08d676130b64f64ad20fcd915" Feb 02 18:03:31 crc kubenswrapper[4858]: E0202 18:03:31.994634 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a06f84db8e295be250165b1f6f0135eda62ccfe08d676130b64f64ad20fcd915\": container with ID starting with a06f84db8e295be250165b1f6f0135eda62ccfe08d676130b64f64ad20fcd915 not found: ID does not exist" containerID="a06f84db8e295be250165b1f6f0135eda62ccfe08d676130b64f64ad20fcd915" Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.994672 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a06f84db8e295be250165b1f6f0135eda62ccfe08d676130b64f64ad20fcd915"} err="failed to get container status \"a06f84db8e295be250165b1f6f0135eda62ccfe08d676130b64f64ad20fcd915\": rpc error: code = NotFound desc = could not find container \"a06f84db8e295be250165b1f6f0135eda62ccfe08d676130b64f64ad20fcd915\": 
container with ID starting with a06f84db8e295be250165b1f6f0135eda62ccfe08d676130b64f64ad20fcd915 not found: ID does not exist" Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.994697 4858 scope.go:117] "RemoveContainer" containerID="069568c4b3dd8f73aa1d5ec34a6a21642229e0b7bed079fa469a3272b5883c22" Feb 02 18:03:31 crc kubenswrapper[4858]: E0202 18:03:31.995542 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"069568c4b3dd8f73aa1d5ec34a6a21642229e0b7bed079fa469a3272b5883c22\": container with ID starting with 069568c4b3dd8f73aa1d5ec34a6a21642229e0b7bed079fa469a3272b5883c22 not found: ID does not exist" containerID="069568c4b3dd8f73aa1d5ec34a6a21642229e0b7bed079fa469a3272b5883c22" Feb 02 18:03:31 crc kubenswrapper[4858]: I0202 18:03:31.995588 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"069568c4b3dd8f73aa1d5ec34a6a21642229e0b7bed079fa469a3272b5883c22"} err="failed to get container status \"069568c4b3dd8f73aa1d5ec34a6a21642229e0b7bed079fa469a3272b5883c22\": rpc error: code = NotFound desc = could not find container \"069568c4b3dd8f73aa1d5ec34a6a21642229e0b7bed079fa469a3272b5883c22\": container with ID starting with 069568c4b3dd8f73aa1d5ec34a6a21642229e0b7bed079fa469a3272b5883c22 not found: ID does not exist" Feb 02 18:03:32 crc kubenswrapper[4858]: I0202 18:03:32.412690 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddd92075-1f43-4dda-9adf-e07ffb1882ae" path="/var/lib/kubelet/pods/ddd92075-1f43-4dda-9adf-e07ffb1882ae/volumes" Feb 02 18:03:40 crc kubenswrapper[4858]: I0202 18:03:40.854613 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2fsmx" Feb 02 18:03:40 crc kubenswrapper[4858]: I0202 18:03:40.905546 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2fsmx" Feb 02 18:03:41 crc kubenswrapper[4858]: I0202 18:03:41.089118 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2fsmx"] Feb 02 18:03:41 crc kubenswrapper[4858]: I0202 18:03:41.959493 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2fsmx" podUID="fddffd1c-49ff-408e-98bf-211f38c9004a" containerName="registry-server" containerID="cri-o://4c1e092c741747274da2a574202a57f0cdb980bbba5a2d3fb62cc0006f4571a5" gracePeriod=2 Feb 02 18:03:42 crc kubenswrapper[4858]: I0202 18:03:42.434321 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2fsmx" Feb 02 18:03:42 crc kubenswrapper[4858]: I0202 18:03:42.561510 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fddffd1c-49ff-408e-98bf-211f38c9004a-catalog-content\") pod \"fddffd1c-49ff-408e-98bf-211f38c9004a\" (UID: \"fddffd1c-49ff-408e-98bf-211f38c9004a\") " Feb 02 18:03:42 crc kubenswrapper[4858]: I0202 18:03:42.561621 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fddffd1c-49ff-408e-98bf-211f38c9004a-utilities\") pod \"fddffd1c-49ff-408e-98bf-211f38c9004a\" (UID: \"fddffd1c-49ff-408e-98bf-211f38c9004a\") " Feb 02 18:03:42 crc kubenswrapper[4858]: I0202 18:03:42.561684 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltt8m\" (UniqueName: \"kubernetes.io/projected/fddffd1c-49ff-408e-98bf-211f38c9004a-kube-api-access-ltt8m\") pod \"fddffd1c-49ff-408e-98bf-211f38c9004a\" (UID: \"fddffd1c-49ff-408e-98bf-211f38c9004a\") " Feb 02 18:03:42 crc kubenswrapper[4858]: I0202 18:03:42.562717 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fddffd1c-49ff-408e-98bf-211f38c9004a-utilities" (OuterVolumeSpecName: "utilities") pod "fddffd1c-49ff-408e-98bf-211f38c9004a" (UID: "fddffd1c-49ff-408e-98bf-211f38c9004a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:03:42 crc kubenswrapper[4858]: I0202 18:03:42.568390 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fddffd1c-49ff-408e-98bf-211f38c9004a-kube-api-access-ltt8m" (OuterVolumeSpecName: "kube-api-access-ltt8m") pod "fddffd1c-49ff-408e-98bf-211f38c9004a" (UID: "fddffd1c-49ff-408e-98bf-211f38c9004a"). InnerVolumeSpecName "kube-api-access-ltt8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:03:42 crc kubenswrapper[4858]: I0202 18:03:42.664337 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fddffd1c-49ff-408e-98bf-211f38c9004a-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 18:03:42 crc kubenswrapper[4858]: I0202 18:03:42.664371 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltt8m\" (UniqueName: \"kubernetes.io/projected/fddffd1c-49ff-408e-98bf-211f38c9004a-kube-api-access-ltt8m\") on node \"crc\" DevicePath \"\"" Feb 02 18:03:42 crc kubenswrapper[4858]: I0202 18:03:42.687522 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fddffd1c-49ff-408e-98bf-211f38c9004a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fddffd1c-49ff-408e-98bf-211f38c9004a" (UID: "fddffd1c-49ff-408e-98bf-211f38c9004a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:03:42 crc kubenswrapper[4858]: I0202 18:03:42.766269 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fddffd1c-49ff-408e-98bf-211f38c9004a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 18:03:42 crc kubenswrapper[4858]: I0202 18:03:42.973487 4858 generic.go:334] "Generic (PLEG): container finished" podID="fddffd1c-49ff-408e-98bf-211f38c9004a" containerID="4c1e092c741747274da2a574202a57f0cdb980bbba5a2d3fb62cc0006f4571a5" exitCode=0 Feb 02 18:03:42 crc kubenswrapper[4858]: I0202 18:03:42.973543 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2fsmx" event={"ID":"fddffd1c-49ff-408e-98bf-211f38c9004a","Type":"ContainerDied","Data":"4c1e092c741747274da2a574202a57f0cdb980bbba5a2d3fb62cc0006f4571a5"} Feb 02 18:03:42 crc kubenswrapper[4858]: I0202 18:03:42.973574 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2fsmx" event={"ID":"fddffd1c-49ff-408e-98bf-211f38c9004a","Type":"ContainerDied","Data":"7a653391805df3a1d2b453b826743fde410210306c2b556a114ea7e0cc39f221"} Feb 02 18:03:42 crc kubenswrapper[4858]: I0202 18:03:42.973594 4858 scope.go:117] "RemoveContainer" containerID="4c1e092c741747274da2a574202a57f0cdb980bbba5a2d3fb62cc0006f4571a5" Feb 02 18:03:42 crc kubenswrapper[4858]: I0202 18:03:42.973755 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2fsmx" Feb 02 18:03:42 crc kubenswrapper[4858]: I0202 18:03:42.993985 4858 scope.go:117] "RemoveContainer" containerID="93528e2205a4f8a340129f70f5f8428c25e34ae62938d6f4aaaca0ef0382826f" Feb 02 18:03:43 crc kubenswrapper[4858]: I0202 18:03:43.013238 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2fsmx"] Feb 02 18:03:43 crc kubenswrapper[4858]: I0202 18:03:43.017228 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2fsmx"] Feb 02 18:03:43 crc kubenswrapper[4858]: I0202 18:03:43.044172 4858 scope.go:117] "RemoveContainer" containerID="c3e410d5956455a6a48c7fd28256d1d07f6232f72f6c8a3474af2f4e88e7e2bd" Feb 02 18:03:43 crc kubenswrapper[4858]: I0202 18:03:43.065254 4858 scope.go:117] "RemoveContainer" containerID="4c1e092c741747274da2a574202a57f0cdb980bbba5a2d3fb62cc0006f4571a5" Feb 02 18:03:43 crc kubenswrapper[4858]: E0202 18:03:43.065777 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c1e092c741747274da2a574202a57f0cdb980bbba5a2d3fb62cc0006f4571a5\": container with ID starting with 4c1e092c741747274da2a574202a57f0cdb980bbba5a2d3fb62cc0006f4571a5 not found: ID does not exist" containerID="4c1e092c741747274da2a574202a57f0cdb980bbba5a2d3fb62cc0006f4571a5" Feb 02 18:03:43 crc kubenswrapper[4858]: I0202 18:03:43.065827 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c1e092c741747274da2a574202a57f0cdb980bbba5a2d3fb62cc0006f4571a5"} err="failed to get container status \"4c1e092c741747274da2a574202a57f0cdb980bbba5a2d3fb62cc0006f4571a5\": rpc error: code = NotFound desc = could not find container \"4c1e092c741747274da2a574202a57f0cdb980bbba5a2d3fb62cc0006f4571a5\": container with ID starting with 4c1e092c741747274da2a574202a57f0cdb980bbba5a2d3fb62cc0006f4571a5 not found: ID does not exist" Feb 02 18:03:43 crc 
kubenswrapper[4858]: I0202 18:03:43.065862 4858 scope.go:117] "RemoveContainer" containerID="93528e2205a4f8a340129f70f5f8428c25e34ae62938d6f4aaaca0ef0382826f" Feb 02 18:03:43 crc kubenswrapper[4858]: E0202 18:03:43.066283 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93528e2205a4f8a340129f70f5f8428c25e34ae62938d6f4aaaca0ef0382826f\": container with ID starting with 93528e2205a4f8a340129f70f5f8428c25e34ae62938d6f4aaaca0ef0382826f not found: ID does not exist" containerID="93528e2205a4f8a340129f70f5f8428c25e34ae62938d6f4aaaca0ef0382826f" Feb 02 18:03:43 crc kubenswrapper[4858]: I0202 18:03:43.066319 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93528e2205a4f8a340129f70f5f8428c25e34ae62938d6f4aaaca0ef0382826f"} err="failed to get container status \"93528e2205a4f8a340129f70f5f8428c25e34ae62938d6f4aaaca0ef0382826f\": rpc error: code = NotFound desc = could not find container \"93528e2205a4f8a340129f70f5f8428c25e34ae62938d6f4aaaca0ef0382826f\": container with ID starting with 93528e2205a4f8a340129f70f5f8428c25e34ae62938d6f4aaaca0ef0382826f not found: ID does not exist" Feb 02 18:03:43 crc kubenswrapper[4858]: I0202 18:03:43.066336 4858 scope.go:117] "RemoveContainer" containerID="c3e410d5956455a6a48c7fd28256d1d07f6232f72f6c8a3474af2f4e88e7e2bd" Feb 02 18:03:43 crc kubenswrapper[4858]: E0202 18:03:43.066693 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3e410d5956455a6a48c7fd28256d1d07f6232f72f6c8a3474af2f4e88e7e2bd\": container with ID starting with c3e410d5956455a6a48c7fd28256d1d07f6232f72f6c8a3474af2f4e88e7e2bd not found: ID does not exist" containerID="c3e410d5956455a6a48c7fd28256d1d07f6232f72f6c8a3474af2f4e88e7e2bd" Feb 02 18:03:43 crc kubenswrapper[4858]: I0202 18:03:43.066721 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3e410d5956455a6a48c7fd28256d1d07f6232f72f6c8a3474af2f4e88e7e2bd"} err="failed to get container status \"c3e410d5956455a6a48c7fd28256d1d07f6232f72f6c8a3474af2f4e88e7e2bd\": rpc error: code = NotFound desc = could not find container \"c3e410d5956455a6a48c7fd28256d1d07f6232f72f6c8a3474af2f4e88e7e2bd\": container with ID starting with c3e410d5956455a6a48c7fd28256d1d07f6232f72f6c8a3474af2f4e88e7e2bd not found: ID does not exist" Feb 02 18:03:44 crc kubenswrapper[4858]: I0202 18:03:44.413112 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fddffd1c-49ff-408e-98bf-211f38c9004a" path="/var/lib/kubelet/pods/fddffd1c-49ff-408e-98bf-211f38c9004a/volumes" Feb 02 18:05:57 crc kubenswrapper[4858]: I0202 18:05:57.807347 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 18:05:57 crc kubenswrapper[4858]: I0202 18:05:57.808024 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 18:06:27 crc kubenswrapper[4858]: I0202 18:06:27.807674 4858 patch_prober.go:28] interesting 
pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 18:06:27 crc kubenswrapper[4858]: I0202 18:06:27.808259 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 18:06:57 crc kubenswrapper[4858]: I0202 18:06:57.807224 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 18:06:57 crc kubenswrapper[4858]: I0202 18:06:57.807724 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 18:06:57 crc kubenswrapper[4858]: I0202 18:06:57.807765 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" Feb 02 18:06:57 crc kubenswrapper[4858]: I0202 18:06:57.808505 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195"} pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 18:06:57 crc kubenswrapper[4858]: I0202 18:06:57.808563 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" containerID="cri-o://a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" gracePeriod=600 Feb 02 18:06:57 crc kubenswrapper[4858]: E0202 18:06:57.948899 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:06:58 crc kubenswrapper[4858]: I0202 18:06:58.761107 4858 generic.go:334] "Generic (PLEG): container finished" podID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" exitCode=0 Feb 02 18:06:58 crc kubenswrapper[4858]: I0202 18:06:58.761158 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerDied","Data":"a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195"} Feb 02 18:06:58 crc kubenswrapper[4858]: 
I0202 18:06:58.761193 4858 scope.go:117] "RemoveContainer" containerID="c138fbcf7f05ffa20c7f25a2579c78076541e9f160625248f5a538afb5d97df4" Feb 02 18:06:58 crc kubenswrapper[4858]: I0202 18:06:58.762106 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:06:58 crc kubenswrapper[4858]: E0202 18:06:58.762482 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:07:13 crc kubenswrapper[4858]: I0202 18:07:13.400889 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:07:13 crc kubenswrapper[4858]: E0202 18:07:13.401776 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:07:28 crc kubenswrapper[4858]: I0202 18:07:28.400737 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:07:28 crc kubenswrapper[4858]: E0202 18:07:28.401582 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:07:40 crc kubenswrapper[4858]: I0202 18:07:40.407714 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:07:40 crc kubenswrapper[4858]: E0202 18:07:40.409489 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:07:54 crc kubenswrapper[4858]: I0202 18:07:54.400827 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:07:54 crc kubenswrapper[4858]: E0202 18:07:54.404348 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:08:06 crc kubenswrapper[4858]: I0202 
18:08:06.400849 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:08:06 crc kubenswrapper[4858]: E0202 18:08:06.401652 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:08:19 crc kubenswrapper[4858]: I0202 18:08:19.402042 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:08:19 crc kubenswrapper[4858]: E0202 18:08:19.402906 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:08:30 crc kubenswrapper[4858]: I0202 18:08:30.406456 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:08:30 crc kubenswrapper[4858]: E0202 18:08:30.407148 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:08:37 crc kubenswrapper[4858]: I0202 18:08:37.639769 4858 generic.go:334] "Generic (PLEG): container finished" podID="6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52" containerID="e8d7e0c6469001c518d68df1f0960bde4554608fc3677dd9cca9ab5ebbbe9a46" exitCode=0 Feb 02 18:08:37 crc kubenswrapper[4858]: I0202 18:08:37.639862 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52","Type":"ContainerDied","Data":"e8d7e0c6469001c518d68df1f0960bde4554608fc3677dd9cca9ab5ebbbe9a46"} Feb 02 18:08:38 crc kubenswrapper[4858]: I0202 18:08:38.991455 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.104038 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-openstack-config\") pod \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.104176 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-ssh-key\") pod \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.104245 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-openstack-config-secret\") pod \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.104274 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4q2n\" (UniqueName: \"kubernetes.io/projected/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-kube-api-access-h4q2n\") pod \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.104332 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-test-operator-ephemeral-workdir\") pod \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.104370 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-config-data\") pod \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.104396 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-ca-certs\") pod \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.104440 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-test-operator-ephemeral-temporary\") pod \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.104510 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\" (UID: \"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52\") " Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.105553 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52" (UID: "6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.106224 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-config-data" (OuterVolumeSpecName: "config-data") pod "6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52" (UID: "6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.110152 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-kube-api-access-h4q2n" (OuterVolumeSpecName: "kube-api-access-h4q2n") pod "6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52" (UID: "6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52"). InnerVolumeSpecName "kube-api-access-h4q2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.110148 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52" (UID: "6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.110220 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52" (UID: "6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.134056 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52" (UID: "6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.135384 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52" (UID: "6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.138030 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52" (UID: "6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.157621 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52" (UID: "6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.206675 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.206716 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.206733 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4q2n\" (UniqueName: \"kubernetes.io/projected/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-kube-api-access-h4q2n\") on node \"crc\" DevicePath \"\"" Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.206745 4858 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.206756 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.206766 4858 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.206776 4858 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.206815 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.206825 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.230486 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.309100 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.660618 4858 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tempest-tests-tempest" event={"ID":"6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52","Type":"ContainerDied","Data":"0b47451747e713f019d00adbd677ec869f17b7c71ae0c0712e0281c5aa7ed644"} Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.660680 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b47451747e713f019d00adbd677ec869f17b7c71ae0c0712e0281c5aa7ed644" Feb 02 18:08:39 crc kubenswrapper[4858]: I0202 18:08:39.660768 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 02 18:08:41 crc kubenswrapper[4858]: I0202 18:08:41.400318 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:08:41 crc kubenswrapper[4858]: E0202 18:08:41.400892 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.449849 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 02 18:08:46 crc kubenswrapper[4858]: E0202 18:08:46.450933 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd92075-1f43-4dda-9adf-e07ffb1882ae" containerName="extract-content" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.450952 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd92075-1f43-4dda-9adf-e07ffb1882ae" containerName="extract-content" Feb 02 18:08:46 crc kubenswrapper[4858]: E0202 18:08:46.450990 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52" containerName="tempest-tests-tempest-tests-runner" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.450998 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52" containerName="tempest-tests-tempest-tests-runner" Feb 02 18:08:46 crc kubenswrapper[4858]: E0202 18:08:46.451013 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd92075-1f43-4dda-9adf-e07ffb1882ae" containerName="registry-server" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.451021 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd92075-1f43-4dda-9adf-e07ffb1882ae" containerName="registry-server" Feb 02 18:08:46 crc kubenswrapper[4858]: E0202 18:08:46.451034 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fddffd1c-49ff-408e-98bf-211f38c9004a" containerName="extract-utilities" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.451041 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fddffd1c-49ff-408e-98bf-211f38c9004a" containerName="extract-utilities" Feb 02 18:08:46 crc kubenswrapper[4858]: E0202 18:08:46.451050 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fddffd1c-49ff-408e-98bf-211f38c9004a" containerName="extract-content" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.451056 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fddffd1c-49ff-408e-98bf-211f38c9004a" containerName="extract-content" Feb 02 18:08:46 crc kubenswrapper[4858]: E0202 18:08:46.451065 4858 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f79255e-e635-4fbb-9142-e3a1a0e9373f" containerName="extract-content" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.451070 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f79255e-e635-4fbb-9142-e3a1a0e9373f" containerName="extract-content" Feb 02 18:08:46 crc kubenswrapper[4858]: E0202 18:08:46.451082 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddd92075-1f43-4dda-9adf-e07ffb1882ae" containerName="extract-utilities" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.451087 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddd92075-1f43-4dda-9adf-e07ffb1882ae" containerName="extract-utilities" Feb 02 18:08:46 crc kubenswrapper[4858]: E0202 18:08:46.451100 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f79255e-e635-4fbb-9142-e3a1a0e9373f" containerName="registry-server" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.451105 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f79255e-e635-4fbb-9142-e3a1a0e9373f" containerName="registry-server" Feb 02 18:08:46 crc kubenswrapper[4858]: E0202 18:08:46.451113 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f79255e-e635-4fbb-9142-e3a1a0e9373f" containerName="extract-utilities" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.451119 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f79255e-e635-4fbb-9142-e3a1a0e9373f" containerName="extract-utilities" Feb 02 18:08:46 crc kubenswrapper[4858]: E0202 18:08:46.451126 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fddffd1c-49ff-408e-98bf-211f38c9004a" containerName="registry-server" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.451131 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fddffd1c-49ff-408e-98bf-211f38c9004a" containerName="registry-server" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.451376 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f79255e-e635-4fbb-9142-e3a1a0e9373f" containerName="registry-server" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.451390 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="fddffd1c-49ff-408e-98bf-211f38c9004a" containerName="registry-server" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.451404 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52" containerName="tempest-tests-tempest-tests-runner" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.451419 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddd92075-1f43-4dda-9adf-e07ffb1882ae" containerName="registry-server" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.452148 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.454181 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-scf2w" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.459505 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.546708 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj7pr\" (UniqueName: \"kubernetes.io/projected/3a3e6ddb-d991-4bf6-a248-b333da853203-kube-api-access-fj7pr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3a3e6ddb-d991-4bf6-a248-b333da853203\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.547078 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3a3e6ddb-d991-4bf6-a248-b333da853203\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.649499 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj7pr\" (UniqueName: \"kubernetes.io/projected/3a3e6ddb-d991-4bf6-a248-b333da853203-kube-api-access-fj7pr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3a3e6ddb-d991-4bf6-a248-b333da853203\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.649593 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3a3e6ddb-d991-4bf6-a248-b333da853203\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.650170 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3a3e6ddb-d991-4bf6-a248-b333da853203\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.673267 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj7pr\" (UniqueName: \"kubernetes.io/projected/3a3e6ddb-d991-4bf6-a248-b333da853203-kube-api-access-fj7pr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3a3e6ddb-d991-4bf6-a248-b333da853203\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 18:08:46 crc kubenswrapper[4858]: I0202 18:08:46.678751 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3a3e6ddb-d991-4bf6-a248-b333da853203\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 18:08:46 crc 
kubenswrapper[4858]: I0202 18:08:46.776367 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 02 18:08:47 crc kubenswrapper[4858]: I0202 18:08:47.201824 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 02 18:08:47 crc kubenswrapper[4858]: I0202 18:08:47.210952 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 18:08:47 crc kubenswrapper[4858]: I0202 18:08:47.738668 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"3a3e6ddb-d991-4bf6-a248-b333da853203","Type":"ContainerStarted","Data":"4127c628aee41617f170eb0ef9bb9417782fd84c4835ec595427af03b0f30790"} Feb 02 18:08:48 crc kubenswrapper[4858]: I0202 18:08:48.756326 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"3a3e6ddb-d991-4bf6-a248-b333da853203","Type":"ContainerStarted","Data":"96be209d51aa044d238b6848e4ee964c297006b9d5a0e9e18decec972b5256ea"} Feb 02 18:08:48 crc kubenswrapper[4858]: I0202 18:08:48.778380 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.803549883 podStartE2EDuration="2.778358056s" podCreationTimestamp="2026-02-02 18:08:46 +0000 UTC" firstStartedPulling="2026-02-02 18:08:47.210783023 +0000 UTC m=+3228.363198288" lastFinishedPulling="2026-02-02 18:08:48.185591196 +0000 UTC m=+3229.338006461" observedRunningTime="2026-02-02 18:08:48.771700976 +0000 UTC m=+3229.924116251" watchObservedRunningTime="2026-02-02 18:08:48.778358056 +0000 UTC m=+3229.930773321" Feb 02 18:08:53 crc kubenswrapper[4858]: I0202 18:08:53.401127 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:08:53 crc kubenswrapper[4858]: E0202 18:08:53.401929 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:09:08 crc kubenswrapper[4858]: I0202 18:09:08.400469 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:09:08 crc kubenswrapper[4858]: E0202 18:09:08.401360 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:09:11 crc kubenswrapper[4858]: I0202 18:09:11.566260 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7gxhs/must-gather-6qlvm"] Feb 02 18:09:11 crc kubenswrapper[4858]: I0202 18:09:11.568406 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7gxhs/must-gather-6qlvm" Feb 02 18:09:11 crc kubenswrapper[4858]: I0202 18:09:11.570867 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7gxhs"/"openshift-service-ca.crt" Feb 02 18:09:11 crc kubenswrapper[4858]: I0202 18:09:11.572535 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7gxhs"/"kube-root-ca.crt" Feb 02 18:09:11 crc kubenswrapper[4858]: I0202 18:09:11.579892 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7gxhs/must-gather-6qlvm"] Feb 02 18:09:11 crc kubenswrapper[4858]: I0202 18:09:11.708920 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ae31fff3-88d5-48ac-8d5a-b732e04c158b-must-gather-output\") pod \"must-gather-6qlvm\" (UID: \"ae31fff3-88d5-48ac-8d5a-b732e04c158b\") " pod="openshift-must-gather-7gxhs/must-gather-6qlvm" Feb 02 18:09:11 crc kubenswrapper[4858]: I0202 18:09:11.709039 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnlkd\" (UniqueName: \"kubernetes.io/projected/ae31fff3-88d5-48ac-8d5a-b732e04c158b-kube-api-access-pnlkd\") pod \"must-gather-6qlvm\" (UID: \"ae31fff3-88d5-48ac-8d5a-b732e04c158b\") " pod="openshift-must-gather-7gxhs/must-gather-6qlvm" Feb 02 18:09:11 crc kubenswrapper[4858]: I0202 18:09:11.810478 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ae31fff3-88d5-48ac-8d5a-b732e04c158b-must-gather-output\") pod \"must-gather-6qlvm\" (UID: \"ae31fff3-88d5-48ac-8d5a-b732e04c158b\") " pod="openshift-must-gather-7gxhs/must-gather-6qlvm" Feb 02 18:09:11 crc kubenswrapper[4858]: I0202 18:09:11.810587 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnlkd\" (UniqueName: \"kubernetes.io/projected/ae31fff3-88d5-48ac-8d5a-b732e04c158b-kube-api-access-pnlkd\") pod \"must-gather-6qlvm\" (UID: \"ae31fff3-88d5-48ac-8d5a-b732e04c158b\") " pod="openshift-must-gather-7gxhs/must-gather-6qlvm" Feb 02 18:09:11 crc kubenswrapper[4858]: I0202 18:09:11.810964 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ae31fff3-88d5-48ac-8d5a-b732e04c158b-must-gather-output\") pod \"must-gather-6qlvm\" (UID: \"ae31fff3-88d5-48ac-8d5a-b732e04c158b\") " pod="openshift-must-gather-7gxhs/must-gather-6qlvm" Feb 02 18:09:11 crc kubenswrapper[4858]: I0202 18:09:11.838119 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnlkd\" (UniqueName: \"kubernetes.io/projected/ae31fff3-88d5-48ac-8d5a-b732e04c158b-kube-api-access-pnlkd\") pod \"must-gather-6qlvm\" (UID: \"ae31fff3-88d5-48ac-8d5a-b732e04c158b\") " pod="openshift-must-gather-7gxhs/must-gather-6qlvm" Feb 02 18:09:11 crc kubenswrapper[4858]: I0202 18:09:11.886557 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7gxhs/must-gather-6qlvm" Feb 02 18:09:12 crc kubenswrapper[4858]: I0202 18:09:12.353211 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7gxhs/must-gather-6qlvm"] Feb 02 18:09:12 crc kubenswrapper[4858]: I0202 18:09:12.959905 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7gxhs/must-gather-6qlvm" event={"ID":"ae31fff3-88d5-48ac-8d5a-b732e04c158b","Type":"ContainerStarted","Data":"6138b0f04491be0ad7525d7f26299064d8bb4c074ae3bf48ccc947997bf79a88"} Feb 02 18:09:16 crc kubenswrapper[4858]: I0202 18:09:16.999118 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7gxhs/must-gather-6qlvm" event={"ID":"ae31fff3-88d5-48ac-8d5a-b732e04c158b","Type":"ContainerStarted","Data":"2526d249755c39b64700708da38287359b01ffe7e219e337eba79662f0a0d70d"} Feb 02 18:09:16 crc kubenswrapper[4858]: I0202 18:09:16.999724 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7gxhs/must-gather-6qlvm" event={"ID":"ae31fff3-88d5-48ac-8d5a-b732e04c158b","Type":"ContainerStarted","Data":"10e0d3ed8756b2b1d309fec98a2ee73afda935df5fde314675607f9087635e39"} Feb 02 18:09:17 crc kubenswrapper[4858]: I0202 18:09:17.021251 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7gxhs/must-gather-6qlvm" podStartSLOduration=2.384955806 podStartE2EDuration="6.021230746s" podCreationTimestamp="2026-02-02 18:09:11 +0000 UTC" firstStartedPulling="2026-02-02 18:09:12.368951727 +0000 UTC m=+3253.521366992" lastFinishedPulling="2026-02-02 18:09:16.005226667 +0000 UTC m=+3257.157641932" observedRunningTime="2026-02-02 18:09:17.015500562 +0000 UTC m=+3258.167915827" watchObservedRunningTime="2026-02-02 18:09:17.021230746 +0000 UTC m=+3258.173646011" Feb 02 18:09:20 crc kubenswrapper[4858]: E0202 18:09:20.180862 4858 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.13:43924->38.102.83.13:36069: read tcp 38.102.83.13:43924->38.102.83.13:36069: read: connection reset by peer Feb 02 18:09:20 crc kubenswrapper[4858]: E0202 18:09:20.432817 4858 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.13:43952->38.102.83.13:36069: read tcp 38.102.83.13:43952->38.102.83.13:36069: read: connection reset by peer Feb 02 18:09:21 crc kubenswrapper[4858]: I0202 18:09:21.304431 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7gxhs/crc-debug-7bcr9"] Feb 02 18:09:21 crc kubenswrapper[4858]: I0202 18:09:21.306220 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7gxhs/crc-debug-7bcr9" Feb 02 18:09:21 crc kubenswrapper[4858]: I0202 18:09:21.308896 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-7gxhs"/"default-dockercfg-pn7s4" Feb 02 18:09:21 crc kubenswrapper[4858]: I0202 18:09:21.401201 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:09:21 crc kubenswrapper[4858]: E0202 18:09:21.401561 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:09:21 crc kubenswrapper[4858]: I0202 18:09:21.441265 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2c766c2f-d56b-4cff-b517-aceaaeb92321-host\") pod \"crc-debug-7bcr9\" (UID: \"2c766c2f-d56b-4cff-b517-aceaaeb92321\") " pod="openshift-must-gather-7gxhs/crc-debug-7bcr9" Feb 02 18:09:21 crc kubenswrapper[4858]: I0202 18:09:21.441839 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wqrr\" (UniqueName: \"kubernetes.io/projected/2c766c2f-d56b-4cff-b517-aceaaeb92321-kube-api-access-7wqrr\") pod \"crc-debug-7bcr9\" (UID: \"2c766c2f-d56b-4cff-b517-aceaaeb92321\") " pod="openshift-must-gather-7gxhs/crc-debug-7bcr9" Feb 02 18:09:21 crc kubenswrapper[4858]: I0202 18:09:21.543277 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wqrr\" (UniqueName: \"kubernetes.io/projected/2c766c2f-d56b-4cff-b517-aceaaeb92321-kube-api-access-7wqrr\") pod \"crc-debug-7bcr9\" (UID: \"2c766c2f-d56b-4cff-b517-aceaaeb92321\") " pod="openshift-must-gather-7gxhs/crc-debug-7bcr9" Feb 02 18:09:21 crc kubenswrapper[4858]: I0202 18:09:21.543515 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2c766c2f-d56b-4cff-b517-aceaaeb92321-host\") pod \"crc-debug-7bcr9\" (UID: \"2c766c2f-d56b-4cff-b517-aceaaeb92321\") " pod="openshift-must-gather-7gxhs/crc-debug-7bcr9" Feb 02 18:09:21 crc kubenswrapper[4858]: I0202 18:09:21.543885 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2c766c2f-d56b-4cff-b517-aceaaeb92321-host\") pod \"crc-debug-7bcr9\" (UID: \"2c766c2f-d56b-4cff-b517-aceaaeb92321\") " pod="openshift-must-gather-7gxhs/crc-debug-7bcr9" Feb 02 18:09:21 crc kubenswrapper[4858]: I0202 18:09:21.584381 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wqrr\" (UniqueName: \"kubernetes.io/projected/2c766c2f-d56b-4cff-b517-aceaaeb92321-kube-api-access-7wqrr\") pod \"crc-debug-7bcr9\" (UID: \"2c766c2f-d56b-4cff-b517-aceaaeb92321\") " pod="openshift-must-gather-7gxhs/crc-debug-7bcr9" Feb 02 18:09:21 crc kubenswrapper[4858]: I0202 18:09:21.625639 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7gxhs/crc-debug-7bcr9" Feb 02 18:09:21 crc kubenswrapper[4858]: I0202 18:09:21.749567 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7gxhs/crc-debug-7bcr9" event={"ID":"2c766c2f-d56b-4cff-b517-aceaaeb92321","Type":"ContainerStarted","Data":"8e4d7286d89ceb024ae1341f2718f1ff0832a6bcc2cd61e57173cfea4ab263b8"} Feb 02 18:09:33 crc kubenswrapper[4858]: I0202 18:09:33.514091 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:09:33 crc kubenswrapper[4858]: E0202 18:09:33.524212 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:09:34 crc kubenswrapper[4858]: I0202 18:09:34.891378 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7gxhs/crc-debug-7bcr9" event={"ID":"2c766c2f-d56b-4cff-b517-aceaaeb92321","Type":"ContainerStarted","Data":"0d7a1f687d71e8fb6a2f29239b0c478f9499d2ca05293e5b0a8c36022f97ae9d"} Feb 02 18:09:34 crc kubenswrapper[4858]: I0202 18:09:34.938552 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7gxhs/crc-debug-7bcr9" podStartSLOduration=0.99199133 podStartE2EDuration="13.938532574s" podCreationTimestamp="2026-02-02 18:09:21 +0000 UTC" firstStartedPulling="2026-02-02 18:09:21.669448779 +0000 UTC m=+3262.821864044" lastFinishedPulling="2026-02-02 18:09:34.615990023 +0000 UTC m=+3275.768405288" observedRunningTime="2026-02-02 18:09:34.917539215 +0000 UTC m=+3276.069954490" watchObservedRunningTime="2026-02-02 18:09:34.938532574 +0000 UTC m=+3276.090947839" Feb 02 18:09:45 crc kubenswrapper[4858]: I0202 18:09:45.402490 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:09:45 crc kubenswrapper[4858]: E0202 18:09:45.403501 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:09:57 crc kubenswrapper[4858]: I0202 18:09:57.400887 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:09:57 crc kubenswrapper[4858]: E0202 18:09:57.401714 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:10:00 crc kubenswrapper[4858]: I0202 18:10:00.925827 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j6fqc"] Feb 02 18:10:00 crc 
kubenswrapper[4858]: I0202 18:10:00.928653 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j6fqc" Feb 02 18:10:00 crc kubenswrapper[4858]: I0202 18:10:00.934778 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j6fqc"] Feb 02 18:10:01 crc kubenswrapper[4858]: I0202 18:10:01.041284 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52vfl\" (UniqueName: \"kubernetes.io/projected/d3636986-f7a1-4210-a031-0e2bcc83a43c-kube-api-access-52vfl\") pod \"redhat-marketplace-j6fqc\" (UID: \"d3636986-f7a1-4210-a031-0e2bcc83a43c\") " pod="openshift-marketplace/redhat-marketplace-j6fqc" Feb 02 18:10:01 crc kubenswrapper[4858]: I0202 18:10:01.041337 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3636986-f7a1-4210-a031-0e2bcc83a43c-catalog-content\") pod \"redhat-marketplace-j6fqc\" (UID: \"d3636986-f7a1-4210-a031-0e2bcc83a43c\") " pod="openshift-marketplace/redhat-marketplace-j6fqc" Feb 02 18:10:01 crc kubenswrapper[4858]: I0202 18:10:01.041401 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3636986-f7a1-4210-a031-0e2bcc83a43c-utilities\") pod \"redhat-marketplace-j6fqc\" (UID: \"d3636986-f7a1-4210-a031-0e2bcc83a43c\") " pod="openshift-marketplace/redhat-marketplace-j6fqc" Feb 02 18:10:01 crc kubenswrapper[4858]: I0202 18:10:01.143538 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52vfl\" (UniqueName: \"kubernetes.io/projected/d3636986-f7a1-4210-a031-0e2bcc83a43c-kube-api-access-52vfl\") pod \"redhat-marketplace-j6fqc\" (UID: \"d3636986-f7a1-4210-a031-0e2bcc83a43c\") " pod="openshift-marketplace/redhat-marketplace-j6fqc" Feb 02 18:10:01 crc kubenswrapper[4858]: I0202 18:10:01.143609 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3636986-f7a1-4210-a031-0e2bcc83a43c-catalog-content\") pod \"redhat-marketplace-j6fqc\" (UID: \"d3636986-f7a1-4210-a031-0e2bcc83a43c\") " pod="openshift-marketplace/redhat-marketplace-j6fqc" Feb 02 18:10:01 crc kubenswrapper[4858]: I0202 18:10:01.143645 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3636986-f7a1-4210-a031-0e2bcc83a43c-utilities\") pod \"redhat-marketplace-j6fqc\" (UID: \"d3636986-f7a1-4210-a031-0e2bcc83a43c\") " pod="openshift-marketplace/redhat-marketplace-j6fqc" Feb 02 18:10:01 crc kubenswrapper[4858]: I0202 18:10:01.144258 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3636986-f7a1-4210-a031-0e2bcc83a43c-catalog-content\") pod \"redhat-marketplace-j6fqc\" (UID: \"d3636986-f7a1-4210-a031-0e2bcc83a43c\") " pod="openshift-marketplace/redhat-marketplace-j6fqc" Feb 02 18:10:01 crc kubenswrapper[4858]: I0202 18:10:01.144361 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3636986-f7a1-4210-a031-0e2bcc83a43c-utilities\") pod \"redhat-marketplace-j6fqc\" (UID: \"d3636986-f7a1-4210-a031-0e2bcc83a43c\") " pod="openshift-marketplace/redhat-marketplace-j6fqc" Feb 02 18:10:01 crc 
kubenswrapper[4858]: I0202 18:10:01.171368 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52vfl\" (UniqueName: \"kubernetes.io/projected/d3636986-f7a1-4210-a031-0e2bcc83a43c-kube-api-access-52vfl\") pod \"redhat-marketplace-j6fqc\" (UID: \"d3636986-f7a1-4210-a031-0e2bcc83a43c\") " pod="openshift-marketplace/redhat-marketplace-j6fqc" Feb 02 18:10:01 crc kubenswrapper[4858]: I0202 18:10:01.250203 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j6fqc" Feb 02 18:10:01 crc kubenswrapper[4858]: I0202 18:10:01.804882 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j6fqc"] Feb 02 18:10:02 crc kubenswrapper[4858]: I0202 18:10:02.138359 4858 generic.go:334] "Generic (PLEG): container finished" podID="d3636986-f7a1-4210-a031-0e2bcc83a43c" containerID="09bc6ab272c4358af020bb669e7998fa7d4451a8ce6a1114560a84313546c375" exitCode=0 Feb 02 18:10:02 crc kubenswrapper[4858]: I0202 18:10:02.138407 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6fqc" event={"ID":"d3636986-f7a1-4210-a031-0e2bcc83a43c","Type":"ContainerDied","Data":"09bc6ab272c4358af020bb669e7998fa7d4451a8ce6a1114560a84313546c375"} Feb 02 18:10:02 crc kubenswrapper[4858]: I0202 18:10:02.138470 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6fqc" event={"ID":"d3636986-f7a1-4210-a031-0e2bcc83a43c","Type":"ContainerStarted","Data":"a4aaa00f5d92a4e496c1b8d4b95c950c8867cab83b901d8ae991bb0db56e5109"} Feb 02 18:10:04 crc kubenswrapper[4858]: I0202 18:10:04.166483 4858 generic.go:334] "Generic (PLEG): container finished" podID="d3636986-f7a1-4210-a031-0e2bcc83a43c" containerID="b2619afebfdb3195679704e2cc911ee1b208f4dc37b3a8dbc2657a690f16177c" exitCode=0 Feb 02 18:10:04 crc kubenswrapper[4858]: I0202 18:10:04.167107 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6fqc" event={"ID":"d3636986-f7a1-4210-a031-0e2bcc83a43c","Type":"ContainerDied","Data":"b2619afebfdb3195679704e2cc911ee1b208f4dc37b3a8dbc2657a690f16177c"} Feb 02 18:10:05 crc kubenswrapper[4858]: I0202 18:10:05.189793 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6fqc" event={"ID":"d3636986-f7a1-4210-a031-0e2bcc83a43c","Type":"ContainerStarted","Data":"d66332f3e4c9dc7bc863c69b3da2ba70f5d3a47c9e2abbe2b1db1cc9acab4d9a"} Feb 02 18:10:05 crc kubenswrapper[4858]: I0202 18:10:05.210072 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j6fqc" podStartSLOduration=2.38962617 podStartE2EDuration="5.210054518s" podCreationTimestamp="2026-02-02 18:10:00 +0000 UTC" firstStartedPulling="2026-02-02 18:10:02.139894987 +0000 UTC m=+3303.292310252" lastFinishedPulling="2026-02-02 18:10:04.960323335 +0000 UTC m=+3306.112738600" observedRunningTime="2026-02-02 18:10:05.206606509 +0000 UTC m=+3306.359021784" watchObservedRunningTime="2026-02-02 18:10:05.210054518 +0000 UTC m=+3306.362469783" Feb 02 18:10:09 crc kubenswrapper[4858]: I0202 18:10:09.400740 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:10:09 crc kubenswrapper[4858]: E0202 18:10:09.401457 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:10:11 crc kubenswrapper[4858]: I0202 18:10:11.250622 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j6fqc" Feb 02 18:10:11 crc kubenswrapper[4858]: I0202 18:10:11.251086 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j6fqc" Feb 02 18:10:11 crc kubenswrapper[4858]: I0202 18:10:11.296486 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j6fqc" Feb 02 18:10:12 crc kubenswrapper[4858]: I0202 18:10:12.303576 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j6fqc" Feb 02 18:10:12 crc kubenswrapper[4858]: I0202 18:10:12.360416 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j6fqc"] Feb 02 18:10:14 crc kubenswrapper[4858]: I0202 18:10:14.263717 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j6fqc" podUID="d3636986-f7a1-4210-a031-0e2bcc83a43c" containerName="registry-server" containerID="cri-o://d66332f3e4c9dc7bc863c69b3da2ba70f5d3a47c9e2abbe2b1db1cc9acab4d9a" gracePeriod=2 Feb 02 18:10:14 crc kubenswrapper[4858]: I0202 18:10:14.795824 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j6fqc" Feb 02 18:10:14 crc kubenswrapper[4858]: I0202 18:10:14.920854 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3636986-f7a1-4210-a031-0e2bcc83a43c-catalog-content\") pod \"d3636986-f7a1-4210-a031-0e2bcc83a43c\" (UID: \"d3636986-f7a1-4210-a031-0e2bcc83a43c\") " Feb 02 18:10:14 crc kubenswrapper[4858]: I0202 18:10:14.921056 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52vfl\" (UniqueName: \"kubernetes.io/projected/d3636986-f7a1-4210-a031-0e2bcc83a43c-kube-api-access-52vfl\") pod \"d3636986-f7a1-4210-a031-0e2bcc83a43c\" (UID: \"d3636986-f7a1-4210-a031-0e2bcc83a43c\") " Feb 02 18:10:14 crc kubenswrapper[4858]: I0202 18:10:14.921105 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3636986-f7a1-4210-a031-0e2bcc83a43c-utilities\") pod \"d3636986-f7a1-4210-a031-0e2bcc83a43c\" (UID: \"d3636986-f7a1-4210-a031-0e2bcc83a43c\") " Feb 02 18:10:14 crc kubenswrapper[4858]: I0202 18:10:14.922097 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3636986-f7a1-4210-a031-0e2bcc83a43c-utilities" (OuterVolumeSpecName: "utilities") pod "d3636986-f7a1-4210-a031-0e2bcc83a43c" (UID: "d3636986-f7a1-4210-a031-0e2bcc83a43c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:10:14 crc kubenswrapper[4858]: I0202 18:10:14.926628 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3636986-f7a1-4210-a031-0e2bcc83a43c-kube-api-access-52vfl" (OuterVolumeSpecName: "kube-api-access-52vfl") pod "d3636986-f7a1-4210-a031-0e2bcc83a43c" (UID: "d3636986-f7a1-4210-a031-0e2bcc83a43c"). InnerVolumeSpecName "kube-api-access-52vfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:10:14 crc kubenswrapper[4858]: I0202 18:10:14.944206 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3636986-f7a1-4210-a031-0e2bcc83a43c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3636986-f7a1-4210-a031-0e2bcc83a43c" (UID: "d3636986-f7a1-4210-a031-0e2bcc83a43c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:10:15 crc kubenswrapper[4858]: I0202 18:10:15.025527 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3636986-f7a1-4210-a031-0e2bcc83a43c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 18:10:15 crc kubenswrapper[4858]: I0202 18:10:15.025560 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52vfl\" (UniqueName: \"kubernetes.io/projected/d3636986-f7a1-4210-a031-0e2bcc83a43c-kube-api-access-52vfl\") on node \"crc\" DevicePath \"\"" Feb 02 18:10:15 crc kubenswrapper[4858]: I0202 18:10:15.025574 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3636986-f7a1-4210-a031-0e2bcc83a43c-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 18:10:15 crc kubenswrapper[4858]: I0202 18:10:15.275240 4858 generic.go:334] "Generic (PLEG): container finished" podID="d3636986-f7a1-4210-a031-0e2bcc83a43c" containerID="d66332f3e4c9dc7bc863c69b3da2ba70f5d3a47c9e2abbe2b1db1cc9acab4d9a" exitCode=0 Feb 02 18:10:15 crc kubenswrapper[4858]: I0202 18:10:15.275304 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6fqc" event={"ID":"d3636986-f7a1-4210-a031-0e2bcc83a43c","Type":"ContainerDied","Data":"d66332f3e4c9dc7bc863c69b3da2ba70f5d3a47c9e2abbe2b1db1cc9acab4d9a"} Feb 02 18:10:15 crc kubenswrapper[4858]: I0202 18:10:15.275328 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j6fqc" Feb 02 18:10:15 crc kubenswrapper[4858]: I0202 18:10:15.275353 4858 scope.go:117] "RemoveContainer" containerID="d66332f3e4c9dc7bc863c69b3da2ba70f5d3a47c9e2abbe2b1db1cc9acab4d9a" Feb 02 18:10:15 crc kubenswrapper[4858]: I0202 18:10:15.275339 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j6fqc" event={"ID":"d3636986-f7a1-4210-a031-0e2bcc83a43c","Type":"ContainerDied","Data":"a4aaa00f5d92a4e496c1b8d4b95c950c8867cab83b901d8ae991bb0db56e5109"} Feb 02 18:10:15 crc kubenswrapper[4858]: I0202 18:10:15.304298 4858 scope.go:117] "RemoveContainer" containerID="b2619afebfdb3195679704e2cc911ee1b208f4dc37b3a8dbc2657a690f16177c" Feb 02 18:10:15 crc kubenswrapper[4858]: I0202 18:10:15.325443 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j6fqc"] Feb 02 18:10:15 crc kubenswrapper[4858]: I0202 18:10:15.333898 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j6fqc"] Feb 02 18:10:15 crc kubenswrapper[4858]: I0202 18:10:15.352768 4858 scope.go:117] "RemoveContainer" containerID="09bc6ab272c4358af020bb669e7998fa7d4451a8ce6a1114560a84313546c375" Feb 02 18:10:15 crc kubenswrapper[4858]: I0202 18:10:15.385855 4858 scope.go:117] "RemoveContainer" containerID="d66332f3e4c9dc7bc863c69b3da2ba70f5d3a47c9e2abbe2b1db1cc9acab4d9a" Feb 02 18:10:15 crc kubenswrapper[4858]: E0202 18:10:15.386422 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d66332f3e4c9dc7bc863c69b3da2ba70f5d3a47c9e2abbe2b1db1cc9acab4d9a\": container with ID starting with d66332f3e4c9dc7bc863c69b3da2ba70f5d3a47c9e2abbe2b1db1cc9acab4d9a not found: ID does not exist" containerID="d66332f3e4c9dc7bc863c69b3da2ba70f5d3a47c9e2abbe2b1db1cc9acab4d9a" Feb 02 18:10:15 crc kubenswrapper[4858]: I0202 18:10:15.386483 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d66332f3e4c9dc7bc863c69b3da2ba70f5d3a47c9e2abbe2b1db1cc9acab4d9a"} err="failed to get container status \"d66332f3e4c9dc7bc863c69b3da2ba70f5d3a47c9e2abbe2b1db1cc9acab4d9a\": rpc error: code = NotFound desc = could not find container \"d66332f3e4c9dc7bc863c69b3da2ba70f5d3a47c9e2abbe2b1db1cc9acab4d9a\": container with ID starting with d66332f3e4c9dc7bc863c69b3da2ba70f5d3a47c9e2abbe2b1db1cc9acab4d9a not found: ID does not exist" Feb 02 18:10:15 crc kubenswrapper[4858]: I0202 18:10:15.386518 4858 scope.go:117] "RemoveContainer" containerID="b2619afebfdb3195679704e2cc911ee1b208f4dc37b3a8dbc2657a690f16177c" Feb 02 18:10:15 crc kubenswrapper[4858]: E0202 18:10:15.386950 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2619afebfdb3195679704e2cc911ee1b208f4dc37b3a8dbc2657a690f16177c\": container with ID starting with b2619afebfdb3195679704e2cc911ee1b208f4dc37b3a8dbc2657a690f16177c not found: ID does not exist" containerID="b2619afebfdb3195679704e2cc911ee1b208f4dc37b3a8dbc2657a690f16177c" Feb 02 18:10:15 crc kubenswrapper[4858]: I0202 18:10:15.387031 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2619afebfdb3195679704e2cc911ee1b208f4dc37b3a8dbc2657a690f16177c"} err="failed to get container status \"b2619afebfdb3195679704e2cc911ee1b208f4dc37b3a8dbc2657a690f16177c\": rpc error: code = NotFound desc = could not find 
container \"b2619afebfdb3195679704e2cc911ee1b208f4dc37b3a8dbc2657a690f16177c\": container with ID starting with b2619afebfdb3195679704e2cc911ee1b208f4dc37b3a8dbc2657a690f16177c not found: ID does not exist" Feb 02 18:10:15 crc kubenswrapper[4858]: I0202 18:10:15.387051 4858 scope.go:117] "RemoveContainer" containerID="09bc6ab272c4358af020bb669e7998fa7d4451a8ce6a1114560a84313546c375" Feb 02 18:10:15 crc kubenswrapper[4858]: E0202 18:10:15.387656 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09bc6ab272c4358af020bb669e7998fa7d4451a8ce6a1114560a84313546c375\": container with ID starting with 09bc6ab272c4358af020bb669e7998fa7d4451a8ce6a1114560a84313546c375 not found: ID does not exist" containerID="09bc6ab272c4358af020bb669e7998fa7d4451a8ce6a1114560a84313546c375" Feb 02 18:10:15 crc kubenswrapper[4858]: I0202 18:10:15.387683 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09bc6ab272c4358af020bb669e7998fa7d4451a8ce6a1114560a84313546c375"} err="failed to get container status \"09bc6ab272c4358af020bb669e7998fa7d4451a8ce6a1114560a84313546c375\": rpc error: code = NotFound desc = could not find container \"09bc6ab272c4358af020bb669e7998fa7d4451a8ce6a1114560a84313546c375\": container with ID starting with 09bc6ab272c4358af020bb669e7998fa7d4451a8ce6a1114560a84313546c375 not found: ID does not exist" Feb 02 18:10:16 crc kubenswrapper[4858]: I0202 18:10:16.412463 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3636986-f7a1-4210-a031-0e2bcc83a43c" path="/var/lib/kubelet/pods/d3636986-f7a1-4210-a031-0e2bcc83a43c/volumes" Feb 02 18:10:17 crc kubenswrapper[4858]: I0202 18:10:17.297689 4858 generic.go:334] "Generic (PLEG): container finished" podID="2c766c2f-d56b-4cff-b517-aceaaeb92321" containerID="0d7a1f687d71e8fb6a2f29239b0c478f9499d2ca05293e5b0a8c36022f97ae9d" exitCode=0 Feb 02 18:10:17 crc kubenswrapper[4858]: I0202 18:10:17.297769 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7gxhs/crc-debug-7bcr9" event={"ID":"2c766c2f-d56b-4cff-b517-aceaaeb92321","Type":"ContainerDied","Data":"0d7a1f687d71e8fb6a2f29239b0c478f9499d2ca05293e5b0a8c36022f97ae9d"} Feb 02 18:10:18 crc kubenswrapper[4858]: I0202 18:10:18.405586 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7gxhs/crc-debug-7bcr9" Feb 02 18:10:18 crc kubenswrapper[4858]: I0202 18:10:18.442137 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7gxhs/crc-debug-7bcr9"] Feb 02 18:10:18 crc kubenswrapper[4858]: I0202 18:10:18.451095 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7gxhs/crc-debug-7bcr9"] Feb 02 18:10:18 crc kubenswrapper[4858]: I0202 18:10:18.591922 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2c766c2f-d56b-4cff-b517-aceaaeb92321-host\") pod \"2c766c2f-d56b-4cff-b517-aceaaeb92321\" (UID: \"2c766c2f-d56b-4cff-b517-aceaaeb92321\") " Feb 02 18:10:18 crc kubenswrapper[4858]: I0202 18:10:18.592053 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c766c2f-d56b-4cff-b517-aceaaeb92321-host" (OuterVolumeSpecName: "host") pod "2c766c2f-d56b-4cff-b517-aceaaeb92321" (UID: "2c766c2f-d56b-4cff-b517-aceaaeb92321"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 18:10:18 crc kubenswrapper[4858]: I0202 18:10:18.592111 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wqrr\" (UniqueName: \"kubernetes.io/projected/2c766c2f-d56b-4cff-b517-aceaaeb92321-kube-api-access-7wqrr\") pod \"2c766c2f-d56b-4cff-b517-aceaaeb92321\" (UID: \"2c766c2f-d56b-4cff-b517-aceaaeb92321\") " Feb 02 18:10:18 crc kubenswrapper[4858]: I0202 18:10:18.592544 4858 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2c766c2f-d56b-4cff-b517-aceaaeb92321-host\") on node \"crc\" DevicePath \"\"" Feb 02 18:10:18 crc kubenswrapper[4858]: I0202 18:10:18.608943 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c766c2f-d56b-4cff-b517-aceaaeb92321-kube-api-access-7wqrr" (OuterVolumeSpecName: "kube-api-access-7wqrr") pod "2c766c2f-d56b-4cff-b517-aceaaeb92321" (UID: "2c766c2f-d56b-4cff-b517-aceaaeb92321"). InnerVolumeSpecName "kube-api-access-7wqrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:10:18 crc kubenswrapper[4858]: I0202 18:10:18.694758 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wqrr\" (UniqueName: \"kubernetes.io/projected/2c766c2f-d56b-4cff-b517-aceaaeb92321-kube-api-access-7wqrr\") on node \"crc\" DevicePath \"\"" Feb 02 18:10:19 crc kubenswrapper[4858]: I0202 18:10:19.315561 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e4d7286d89ceb024ae1341f2718f1ff0832a6bcc2cd61e57173cfea4ab263b8" Feb 02 18:10:19 crc kubenswrapper[4858]: I0202 18:10:19.315633 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7gxhs/crc-debug-7bcr9" Feb 02 18:10:19 crc kubenswrapper[4858]: I0202 18:10:19.598708 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7gxhs/crc-debug-mhtms"] Feb 02 18:10:19 crc kubenswrapper[4858]: E0202 18:10:19.599101 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3636986-f7a1-4210-a031-0e2bcc83a43c" containerName="extract-content" Feb 02 18:10:19 crc kubenswrapper[4858]: I0202 18:10:19.599119 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3636986-f7a1-4210-a031-0e2bcc83a43c" containerName="extract-content" Feb 02 18:10:19 crc kubenswrapper[4858]: E0202 18:10:19.599138 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c766c2f-d56b-4cff-b517-aceaaeb92321" containerName="container-00" Feb 02 18:10:19 crc kubenswrapper[4858]: I0202 18:10:19.599146 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c766c2f-d56b-4cff-b517-aceaaeb92321" containerName="container-00" Feb 02 18:10:19 crc kubenswrapper[4858]: E0202 18:10:19.599157 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3636986-f7a1-4210-a031-0e2bcc83a43c" containerName="extract-utilities" Feb 02 18:10:19 crc kubenswrapper[4858]: I0202 18:10:19.599164 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3636986-f7a1-4210-a031-0e2bcc83a43c" containerName="extract-utilities" Feb 02 18:10:19 crc kubenswrapper[4858]: E0202 18:10:19.599190 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3636986-f7a1-4210-a031-0e2bcc83a43c" containerName="registry-server" Feb 02 18:10:19 crc kubenswrapper[4858]: I0202 18:10:19.599197 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3636986-f7a1-4210-a031-0e2bcc83a43c" 
containerName="registry-server" Feb 02 18:10:19 crc kubenswrapper[4858]: I0202 18:10:19.599430 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c766c2f-d56b-4cff-b517-aceaaeb92321" containerName="container-00" Feb 02 18:10:19 crc kubenswrapper[4858]: I0202 18:10:19.599448 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3636986-f7a1-4210-a031-0e2bcc83a43c" containerName="registry-server" Feb 02 18:10:19 crc kubenswrapper[4858]: I0202 18:10:19.600123 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7gxhs/crc-debug-mhtms" Feb 02 18:10:19 crc kubenswrapper[4858]: I0202 18:10:19.602857 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-7gxhs"/"default-dockercfg-pn7s4" Feb 02 18:10:19 crc kubenswrapper[4858]: I0202 18:10:19.713680 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxzx8\" (UniqueName: \"kubernetes.io/projected/8f17ec75-3efe-42fe-99f4-108182de633d-kube-api-access-pxzx8\") pod \"crc-debug-mhtms\" (UID: \"8f17ec75-3efe-42fe-99f4-108182de633d\") " pod="openshift-must-gather-7gxhs/crc-debug-mhtms" Feb 02 18:10:19 crc kubenswrapper[4858]: I0202 18:10:19.713759 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8f17ec75-3efe-42fe-99f4-108182de633d-host\") pod \"crc-debug-mhtms\" (UID: \"8f17ec75-3efe-42fe-99f4-108182de633d\") " pod="openshift-must-gather-7gxhs/crc-debug-mhtms" Feb 02 18:10:19 crc kubenswrapper[4858]: I0202 18:10:19.815283 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxzx8\" (UniqueName: \"kubernetes.io/projected/8f17ec75-3efe-42fe-99f4-108182de633d-kube-api-access-pxzx8\") pod \"crc-debug-mhtms\" (UID: \"8f17ec75-3efe-42fe-99f4-108182de633d\") " pod="openshift-must-gather-7gxhs/crc-debug-mhtms" Feb 02 18:10:19 crc kubenswrapper[4858]: I0202 18:10:19.816010 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8f17ec75-3efe-42fe-99f4-108182de633d-host\") pod \"crc-debug-mhtms\" (UID: \"8f17ec75-3efe-42fe-99f4-108182de633d\") " pod="openshift-must-gather-7gxhs/crc-debug-mhtms" Feb 02 18:10:19 crc kubenswrapper[4858]: I0202 18:10:19.816145 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8f17ec75-3efe-42fe-99f4-108182de633d-host\") pod \"crc-debug-mhtms\" (UID: \"8f17ec75-3efe-42fe-99f4-108182de633d\") " pod="openshift-must-gather-7gxhs/crc-debug-mhtms" Feb 02 18:10:19 crc kubenswrapper[4858]: I0202 18:10:19.837675 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxzx8\" (UniqueName: \"kubernetes.io/projected/8f17ec75-3efe-42fe-99f4-108182de633d-kube-api-access-pxzx8\") pod \"crc-debug-mhtms\" (UID: \"8f17ec75-3efe-42fe-99f4-108182de633d\") " pod="openshift-must-gather-7gxhs/crc-debug-mhtms" Feb 02 18:10:19 crc kubenswrapper[4858]: I0202 18:10:19.918626 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7gxhs/crc-debug-mhtms" Feb 02 18:10:20 crc kubenswrapper[4858]: I0202 18:10:20.328367 4858 generic.go:334] "Generic (PLEG): container finished" podID="8f17ec75-3efe-42fe-99f4-108182de633d" containerID="fc73e921135560ce00204e1a53b1e2fed3413e85c368975d5386c22e3d740dd2" exitCode=0 Feb 02 18:10:20 crc kubenswrapper[4858]: I0202 18:10:20.328453 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7gxhs/crc-debug-mhtms" event={"ID":"8f17ec75-3efe-42fe-99f4-108182de633d","Type":"ContainerDied","Data":"fc73e921135560ce00204e1a53b1e2fed3413e85c368975d5386c22e3d740dd2"} Feb 02 18:10:20 crc kubenswrapper[4858]: I0202 18:10:20.328670 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7gxhs/crc-debug-mhtms" event={"ID":"8f17ec75-3efe-42fe-99f4-108182de633d","Type":"ContainerStarted","Data":"98e770f3a5e704044daef1e520560adf08000925df8258812775739697593188"} Feb 02 18:10:20 crc kubenswrapper[4858]: I0202 18:10:20.412357 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c766c2f-d56b-4cff-b517-aceaaeb92321" path="/var/lib/kubelet/pods/2c766c2f-d56b-4cff-b517-aceaaeb92321/volumes" Feb 02 18:10:20 crc kubenswrapper[4858]: I0202 18:10:20.784449 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7gxhs/crc-debug-mhtms"] Feb 02 18:10:20 crc kubenswrapper[4858]: I0202 18:10:20.792803 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7gxhs/crc-debug-mhtms"] Feb 02 18:10:21 crc kubenswrapper[4858]: I0202 18:10:21.400742 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:10:21 crc kubenswrapper[4858]: E0202 18:10:21.401016 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:10:21 crc kubenswrapper[4858]: I0202 18:10:21.430302 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7gxhs/crc-debug-mhtms" Feb 02 18:10:21 crc kubenswrapper[4858]: I0202 18:10:21.557888 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8f17ec75-3efe-42fe-99f4-108182de633d-host\") pod \"8f17ec75-3efe-42fe-99f4-108182de633d\" (UID: \"8f17ec75-3efe-42fe-99f4-108182de633d\") " Feb 02 18:10:21 crc kubenswrapper[4858]: I0202 18:10:21.558036 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f17ec75-3efe-42fe-99f4-108182de633d-host" (OuterVolumeSpecName: "host") pod "8f17ec75-3efe-42fe-99f4-108182de633d" (UID: "8f17ec75-3efe-42fe-99f4-108182de633d"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 18:10:21 crc kubenswrapper[4858]: I0202 18:10:21.558114 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxzx8\" (UniqueName: \"kubernetes.io/projected/8f17ec75-3efe-42fe-99f4-108182de633d-kube-api-access-pxzx8\") pod \"8f17ec75-3efe-42fe-99f4-108182de633d\" (UID: \"8f17ec75-3efe-42fe-99f4-108182de633d\") " Feb 02 18:10:21 crc kubenswrapper[4858]: I0202 18:10:21.559021 4858 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8f17ec75-3efe-42fe-99f4-108182de633d-host\") on node \"crc\" DevicePath \"\"" Feb 02 18:10:21 crc kubenswrapper[4858]: I0202 18:10:21.563577 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f17ec75-3efe-42fe-99f4-108182de633d-kube-api-access-pxzx8" (OuterVolumeSpecName: "kube-api-access-pxzx8") pod "8f17ec75-3efe-42fe-99f4-108182de633d" (UID: "8f17ec75-3efe-42fe-99f4-108182de633d"). InnerVolumeSpecName "kube-api-access-pxzx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:10:21 crc kubenswrapper[4858]: I0202 18:10:21.660908 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxzx8\" (UniqueName: \"kubernetes.io/projected/8f17ec75-3efe-42fe-99f4-108182de633d-kube-api-access-pxzx8\") on node \"crc\" DevicePath \"\"" Feb 02 18:10:21 crc kubenswrapper[4858]: I0202 18:10:21.947653 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7gxhs/crc-debug-w4c2z"] Feb 02 18:10:21 crc kubenswrapper[4858]: E0202 18:10:21.948087 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f17ec75-3efe-42fe-99f4-108182de633d" containerName="container-00" Feb 02 18:10:21 crc kubenswrapper[4858]: I0202 18:10:21.948103 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f17ec75-3efe-42fe-99f4-108182de633d" containerName="container-00" Feb 02 18:10:21 crc kubenswrapper[4858]: I0202 18:10:21.948339 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f17ec75-3efe-42fe-99f4-108182de633d" containerName="container-00" Feb 02 18:10:21 crc kubenswrapper[4858]: I0202 18:10:21.949002 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7gxhs/crc-debug-w4c2z" Feb 02 18:10:22 crc kubenswrapper[4858]: I0202 18:10:22.067359 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/577b3dd1-7c64-4413-81a1-d7705aa6393f-host\") pod \"crc-debug-w4c2z\" (UID: \"577b3dd1-7c64-4413-81a1-d7705aa6393f\") " pod="openshift-must-gather-7gxhs/crc-debug-w4c2z" Feb 02 18:10:22 crc kubenswrapper[4858]: I0202 18:10:22.067551 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tps8k\" (UniqueName: \"kubernetes.io/projected/577b3dd1-7c64-4413-81a1-d7705aa6393f-kube-api-access-tps8k\") pod \"crc-debug-w4c2z\" (UID: \"577b3dd1-7c64-4413-81a1-d7705aa6393f\") " pod="openshift-must-gather-7gxhs/crc-debug-w4c2z" Feb 02 18:10:22 crc kubenswrapper[4858]: I0202 18:10:22.169440 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tps8k\" (UniqueName: \"kubernetes.io/projected/577b3dd1-7c64-4413-81a1-d7705aa6393f-kube-api-access-tps8k\") pod \"crc-debug-w4c2z\" (UID: \"577b3dd1-7c64-4413-81a1-d7705aa6393f\") " pod="openshift-must-gather-7gxhs/crc-debug-w4c2z" Feb 02 18:10:22 crc kubenswrapper[4858]: I0202 18:10:22.169544 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/577b3dd1-7c64-4413-81a1-d7705aa6393f-host\") pod \"crc-debug-w4c2z\" (UID: \"577b3dd1-7c64-4413-81a1-d7705aa6393f\") " pod="openshift-must-gather-7gxhs/crc-debug-w4c2z" Feb 02 18:10:22 crc kubenswrapper[4858]: I0202 18:10:22.169682 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/577b3dd1-7c64-4413-81a1-d7705aa6393f-host\") pod \"crc-debug-w4c2z\" (UID: \"577b3dd1-7c64-4413-81a1-d7705aa6393f\") " pod="openshift-must-gather-7gxhs/crc-debug-w4c2z" Feb 02 18:10:22 crc kubenswrapper[4858]: I0202 18:10:22.187519 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tps8k\" (UniqueName: \"kubernetes.io/projected/577b3dd1-7c64-4413-81a1-d7705aa6393f-kube-api-access-tps8k\") pod \"crc-debug-w4c2z\" (UID: \"577b3dd1-7c64-4413-81a1-d7705aa6393f\") " pod="openshift-must-gather-7gxhs/crc-debug-w4c2z" Feb 02 18:10:22 crc kubenswrapper[4858]: I0202 18:10:22.266099 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7gxhs/crc-debug-w4c2z" Feb 02 18:10:22 crc kubenswrapper[4858]: W0202 18:10:22.302941 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod577b3dd1_7c64_4413_81a1_d7705aa6393f.slice/crio-243a58bed7e06e11524e62e89f8d1dfe68025aeb67fdeebd0a1318a7baf5435f WatchSource:0}: Error finding container 243a58bed7e06e11524e62e89f8d1dfe68025aeb67fdeebd0a1318a7baf5435f: Status 404 returned error can't find the container with id 243a58bed7e06e11524e62e89f8d1dfe68025aeb67fdeebd0a1318a7baf5435f Feb 02 18:10:22 crc kubenswrapper[4858]: I0202 18:10:22.347044 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7gxhs/crc-debug-w4c2z" event={"ID":"577b3dd1-7c64-4413-81a1-d7705aa6393f","Type":"ContainerStarted","Data":"243a58bed7e06e11524e62e89f8d1dfe68025aeb67fdeebd0a1318a7baf5435f"} Feb 02 18:10:22 crc kubenswrapper[4858]: I0202 18:10:22.349158 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98e770f3a5e704044daef1e520560adf08000925df8258812775739697593188" Feb 02 18:10:22 crc kubenswrapper[4858]: I0202 18:10:22.349347 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7gxhs/crc-debug-mhtms" Feb 02 18:10:22 crc kubenswrapper[4858]: I0202 18:10:22.412393 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f17ec75-3efe-42fe-99f4-108182de633d" path="/var/lib/kubelet/pods/8f17ec75-3efe-42fe-99f4-108182de633d/volumes" Feb 02 18:10:23 crc kubenswrapper[4858]: I0202 18:10:23.364380 4858 generic.go:334] "Generic (PLEG): container finished" podID="577b3dd1-7c64-4413-81a1-d7705aa6393f" containerID="b19725ff5b6fd4091d45be9d21ac48f27c39c99a59c8b85eaefd0dc32fb088b4" exitCode=0 Feb 02 18:10:23 crc kubenswrapper[4858]: I0202 18:10:23.364455 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7gxhs/crc-debug-w4c2z" event={"ID":"577b3dd1-7c64-4413-81a1-d7705aa6393f","Type":"ContainerDied","Data":"b19725ff5b6fd4091d45be9d21ac48f27c39c99a59c8b85eaefd0dc32fb088b4"} Feb 02 18:10:23 crc kubenswrapper[4858]: I0202 18:10:23.408154 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7gxhs/crc-debug-w4c2z"] Feb 02 18:10:23 crc kubenswrapper[4858]: I0202 18:10:23.417211 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7gxhs/crc-debug-w4c2z"] Feb 02 18:10:24 crc kubenswrapper[4858]: I0202 18:10:24.513458 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7gxhs/crc-debug-w4c2z" Feb 02 18:10:24 crc kubenswrapper[4858]: I0202 18:10:24.618784 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tps8k\" (UniqueName: \"kubernetes.io/projected/577b3dd1-7c64-4413-81a1-d7705aa6393f-kube-api-access-tps8k\") pod \"577b3dd1-7c64-4413-81a1-d7705aa6393f\" (UID: \"577b3dd1-7c64-4413-81a1-d7705aa6393f\") " Feb 02 18:10:24 crc kubenswrapper[4858]: I0202 18:10:24.618960 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/577b3dd1-7c64-4413-81a1-d7705aa6393f-host\") pod \"577b3dd1-7c64-4413-81a1-d7705aa6393f\" (UID: \"577b3dd1-7c64-4413-81a1-d7705aa6393f\") " Feb 02 18:10:24 crc kubenswrapper[4858]: I0202 18:10:24.619458 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/577b3dd1-7c64-4413-81a1-d7705aa6393f-host" (OuterVolumeSpecName: "host") pod "577b3dd1-7c64-4413-81a1-d7705aa6393f" (UID: "577b3dd1-7c64-4413-81a1-d7705aa6393f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 18:10:24 crc kubenswrapper[4858]: I0202 18:10:24.625600 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/577b3dd1-7c64-4413-81a1-d7705aa6393f-kube-api-access-tps8k" (OuterVolumeSpecName: "kube-api-access-tps8k") pod "577b3dd1-7c64-4413-81a1-d7705aa6393f" (UID: "577b3dd1-7c64-4413-81a1-d7705aa6393f"). InnerVolumeSpecName "kube-api-access-tps8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:10:24 crc kubenswrapper[4858]: I0202 18:10:24.721079 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tps8k\" (UniqueName: \"kubernetes.io/projected/577b3dd1-7c64-4413-81a1-d7705aa6393f-kube-api-access-tps8k\") on node \"crc\" DevicePath \"\"" Feb 02 18:10:24 crc kubenswrapper[4858]: I0202 18:10:24.721120 4858 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/577b3dd1-7c64-4413-81a1-d7705aa6393f-host\") on node \"crc\" DevicePath \"\"" Feb 02 18:10:25 crc kubenswrapper[4858]: I0202 18:10:25.409592 4858 scope.go:117] "RemoveContainer" containerID="b19725ff5b6fd4091d45be9d21ac48f27c39c99a59c8b85eaefd0dc32fb088b4" Feb 02 18:10:25 crc kubenswrapper[4858]: I0202 18:10:25.409662 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7gxhs/crc-debug-w4c2z" Feb 02 18:10:26 crc kubenswrapper[4858]: I0202 18:10:26.411462 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="577b3dd1-7c64-4413-81a1-d7705aa6393f" path="/var/lib/kubelet/pods/577b3dd1-7c64-4413-81a1-d7705aa6393f/volumes" Feb 02 18:10:34 crc kubenswrapper[4858]: I0202 18:10:34.404269 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:10:34 crc kubenswrapper[4858]: E0202 18:10:34.406179 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:10:39 crc kubenswrapper[4858]: I0202 18:10:39.414776 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59c6cb6f96-ss676_8dbf9cae-d42c-47ae-b117-3fd56628b72f/barbican-api/0.log" Feb 02 18:10:39 crc kubenswrapper[4858]: I0202 18:10:39.596351 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59c6cb6f96-ss676_8dbf9cae-d42c-47ae-b117-3fd56628b72f/barbican-api-log/0.log" Feb 02 18:10:39 crc kubenswrapper[4858]: I0202 18:10:39.678666 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-749df8c57d-rd7dc_08f67234-d648-4127-98d7-fcf00df7e1d3/barbican-keystone-listener/0.log" Feb 02 18:10:39 crc kubenswrapper[4858]: I0202 18:10:39.710659 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-749df8c57d-rd7dc_08f67234-d648-4127-98d7-fcf00df7e1d3/barbican-keystone-listener-log/0.log" Feb 02 18:10:39 crc kubenswrapper[4858]: I0202 18:10:39.820343 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8b9496955-6bsmq_9cd8cddc-99bb-4e60-85e5-07d6090cfd49/barbican-worker/0.log" Feb 02 18:10:39 crc kubenswrapper[4858]: I0202 18:10:39.853504 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8b9496955-6bsmq_9cd8cddc-99bb-4e60-85e5-07d6090cfd49/barbican-worker-log/0.log" Feb 02 18:10:40 crc kubenswrapper[4858]: I0202 18:10:40.059508 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm_d0787e12-6645-4df3-8850-b9698b323f69/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:10:40 crc kubenswrapper[4858]: I0202 18:10:40.061629 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_32e8a9b4-688e-42b5-8562-23463e2632c1/ceilometer-central-agent/0.log" Feb 02 18:10:40 crc kubenswrapper[4858]: I0202 18:10:40.152701 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_32e8a9b4-688e-42b5-8562-23463e2632c1/ceilometer-notification-agent/0.log" Feb 02 18:10:40 crc kubenswrapper[4858]: I0202 18:10:40.235103 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_32e8a9b4-688e-42b5-8562-23463e2632c1/sg-core/0.log" Feb 02 18:10:40 crc kubenswrapper[4858]: I0202 18:10:40.285721 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_32e8a9b4-688e-42b5-8562-23463e2632c1/proxy-httpd/0.log" Feb 
02 18:10:40 crc kubenswrapper[4858]: I0202 18:10:40.443762 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c8a1f97c-10b9-489f-9711-d6cd63f6e974/cinder-api-log/0.log" Feb 02 18:10:40 crc kubenswrapper[4858]: I0202 18:10:40.493815 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c8a1f97c-10b9-489f-9711-d6cd63f6e974/cinder-api/0.log" Feb 02 18:10:40 crc kubenswrapper[4858]: I0202 18:10:40.611671 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_34935d73-a8f5-4b92-83fc-734815dbb836/cinder-scheduler/0.log" Feb 02 18:10:40 crc kubenswrapper[4858]: I0202 18:10:40.697469 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_34935d73-a8f5-4b92-83fc-734815dbb836/probe/0.log" Feb 02 18:10:40 crc kubenswrapper[4858]: I0202 18:10:40.749463 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq_07f60796-9efa-4245-955f-14c0c16c918d/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:10:40 crc kubenswrapper[4858]: I0202 18:10:40.906585 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw_18853ae6-771f-43f8-a6e9-5501f381891d/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:10:40 crc kubenswrapper[4858]: I0202 18:10:40.956326 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-96qx9_435e285f-7731-45f3-8c96-282da49d50bf/init/0.log" Feb 02 18:10:41 crc kubenswrapper[4858]: I0202 18:10:41.133665 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-96qx9_435e285f-7731-45f3-8c96-282da49d50bf/init/0.log" Feb 02 18:10:41 crc kubenswrapper[4858]: I0202 18:10:41.183773 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-27k29_b94ab7ee-11a9-42ea-ae40-32926a53ed9a/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:10:41 crc kubenswrapper[4858]: I0202 18:10:41.191420 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-96qx9_435e285f-7731-45f3-8c96-282da49d50bf/dnsmasq-dns/0.log" Feb 02 18:10:41 crc kubenswrapper[4858]: I0202 18:10:41.382474 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_44559c36-6bc9-41d7-810f-f68bb1ed9d18/glance-log/0.log" Feb 02 18:10:41 crc kubenswrapper[4858]: I0202 18:10:41.455288 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_44559c36-6bc9-41d7-810f-f68bb1ed9d18/glance-httpd/0.log" Feb 02 18:10:41 crc kubenswrapper[4858]: I0202 18:10:41.567259 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_15e52f85-8dc6-46f7-8844-701c3e76839c/glance-log/0.log" Feb 02 18:10:41 crc kubenswrapper[4858]: I0202 18:10:41.646008 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_15e52f85-8dc6-46f7-8844-701c3e76839c/glance-httpd/0.log" Feb 02 18:10:41 crc kubenswrapper[4858]: I0202 18:10:41.777544 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-68f4b57796-rhdnw_4a208969-437b-449b-ba53-89364175a52a/horizon/0.log" Feb 02 18:10:41 crc kubenswrapper[4858]: I0202 18:10:41.861762 4858 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-4gs56_c9df746d-9cca-49c2-88e3-8be52b5e9531/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:10:42 crc kubenswrapper[4858]: I0202 18:10:42.057024 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-68f4b57796-rhdnw_4a208969-437b-449b-ba53-89364175a52a/horizon-log/0.log" Feb 02 18:10:42 crc kubenswrapper[4858]: I0202 18:10:42.127119 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-65b4t_dbcce266-9b8e-489e-935d-17695dd8cf62/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:10:42 crc kubenswrapper[4858]: I0202 18:10:42.424423 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6fb4977965-lqqjm_e92af156-c3ae-4bdc-bf59-b07c51dbaef6/keystone-api/0.log" Feb 02 18:10:42 crc kubenswrapper[4858]: I0202 18:10:42.520162 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29500921-5cpwt_aedf0b15-0748-4ad7-afce-e421d046a585/keystone-cron/0.log" Feb 02 18:10:42 crc kubenswrapper[4858]: I0202 18:10:42.710476 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_25eaef2f-c235-44b2-847b-6d4a275f1c3d/kube-state-metrics/0.log" Feb 02 18:10:42 crc kubenswrapper[4858]: I0202 18:10:42.771261 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc_2ab876f9-d750-4647-8212-6f9c4bee6eee/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:10:43 crc kubenswrapper[4858]: I0202 18:10:43.131756 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5765cfccfc-zqg5s_985d2863-cf61-4125-9842-28ec8706dea9/neutron-api/0.log" Feb 02 18:10:43 crc kubenswrapper[4858]: I0202 18:10:43.207107 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5765cfccfc-zqg5s_985d2863-cf61-4125-9842-28ec8706dea9/neutron-httpd/0.log" Feb 02 18:10:43 crc kubenswrapper[4858]: I0202 18:10:43.377967 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2_888cd580-fe65-443a-ac8f-351364f34183/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:10:43 crc kubenswrapper[4858]: I0202 18:10:43.933054 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_cf4ff043-2e61-44ec-a4ca-b93c524edf89/nova-cell0-conductor-conductor/0.log" Feb 02 18:10:43 crc kubenswrapper[4858]: I0202 18:10:43.941791 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_5402e6ff-48ec-47b2-b68e-3385e51ec388/nova-api-log/0.log" Feb 02 18:10:44 crc kubenswrapper[4858]: I0202 18:10:44.088927 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_5402e6ff-48ec-47b2-b68e-3385e51ec388/nova-api-api/0.log" Feb 02 18:10:44 crc kubenswrapper[4858]: I0202 18:10:44.194586 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_846f9c74-1b28-40d3-b2f9-ed7b380fa34f/nova-cell1-conductor-conductor/0.log" Feb 02 18:10:44 crc kubenswrapper[4858]: I0202 18:10:44.287760 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_319e3f38-af96-4ac6-9791-094f9a7d67ab/nova-cell1-novncproxy-novncproxy/0.log" Feb 02 18:10:44 crc kubenswrapper[4858]: I0202 18:10:44.461085 4858 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-q9xnp_6b5342fc-b2c3-4a83-a74d-a49a34ac15a4/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:10:44 crc kubenswrapper[4858]: I0202 18:10:44.598880 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_52ad277d-ba1e-4129-b696-f4fa1a598d72/nova-metadata-log/0.log" Feb 02 18:10:44 crc kubenswrapper[4858]: I0202 18:10:44.948532 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_7215c3c5-9746-4192-b018-0c31b42cee4d/nova-scheduler-scheduler/0.log" Feb 02 18:10:44 crc kubenswrapper[4858]: I0202 18:10:44.988984 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8a3a3fdc-3021-44f0-8520-da5a88cf03e1/mysql-bootstrap/0.log" Feb 02 18:10:45 crc kubenswrapper[4858]: I0202 18:10:45.366795 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8a3a3fdc-3021-44f0-8520-da5a88cf03e1/mysql-bootstrap/0.log" Feb 02 18:10:45 crc kubenswrapper[4858]: I0202 18:10:45.368394 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8a3a3fdc-3021-44f0-8520-da5a88cf03e1/galera/0.log" Feb 02 18:10:45 crc kubenswrapper[4858]: I0202 18:10:45.610625 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3a24f351-b5a8-444d-b67d-7b9635f5a8aa/mysql-bootstrap/0.log" Feb 02 18:10:45 crc kubenswrapper[4858]: I0202 18:10:45.798480 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3a24f351-b5a8-444d-b67d-7b9635f5a8aa/galera/0.log" Feb 02 18:10:45 crc kubenswrapper[4858]: I0202 18:10:45.808540 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3a24f351-b5a8-444d-b67d-7b9635f5a8aa/mysql-bootstrap/0.log" Feb 02 18:10:45 crc kubenswrapper[4858]: I0202 18:10:45.811211 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_52ad277d-ba1e-4129-b696-f4fa1a598d72/nova-metadata-metadata/0.log" Feb 02 18:10:46 crc kubenswrapper[4858]: I0202 18:10:46.059690 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d0882d39-e033-4ce8-8b09-76d55e1c281c/openstackclient/0.log" Feb 02 18:10:46 crc kubenswrapper[4858]: I0202 18:10:46.069908 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-h6kmt_334dab9b-9793-4424-9c39-27eac5f07626/ovn-controller/0.log" Feb 02 18:10:46 crc kubenswrapper[4858]: I0202 18:10:46.264176 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-g5d9v_de17af80-1849-4a19-ae89-50057bc76aa3/openstack-network-exporter/0.log" Feb 02 18:10:46 crc kubenswrapper[4858]: I0202 18:10:46.298581 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tc4gv_77df6a52-36fd-44ea-b30e-33041ed49ed6/ovsdb-server-init/0.log" Feb 02 18:10:46 crc kubenswrapper[4858]: I0202 18:10:46.543843 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tc4gv_77df6a52-36fd-44ea-b30e-33041ed49ed6/ovs-vswitchd/0.log" Feb 02 18:10:46 crc kubenswrapper[4858]: I0202 18:10:46.554489 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tc4gv_77df6a52-36fd-44ea-b30e-33041ed49ed6/ovsdb-server/0.log" Feb 02 18:10:46 crc kubenswrapper[4858]: I0202 18:10:46.562722 4858 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tc4gv_77df6a52-36fd-44ea-b30e-33041ed49ed6/ovsdb-server-init/0.log" Feb 02 18:10:46 crc kubenswrapper[4858]: I0202 18:10:46.974299 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-scsrh_d14bee68-7779-4c77-916e-a58d2a871918/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:10:46 crc kubenswrapper[4858]: I0202 18:10:46.986028 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3c6b95f0-73a1-4b25-9905-2fa224e52142/openstack-network-exporter/0.log" Feb 02 18:10:47 crc kubenswrapper[4858]: I0202 18:10:47.091284 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3c6b95f0-73a1-4b25-9905-2fa224e52142/ovn-northd/0.log" Feb 02 18:10:47 crc kubenswrapper[4858]: I0202 18:10:47.283103 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_10f1d4cf-2e13-41b0-b29a-f889e2acf0d0/openstack-network-exporter/0.log" Feb 02 18:10:47 crc kubenswrapper[4858]: I0202 18:10:47.360621 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_10f1d4cf-2e13-41b0-b29a-f889e2acf0d0/ovsdbserver-nb/0.log" Feb 02 18:10:47 crc kubenswrapper[4858]: I0202 18:10:47.527382 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a62694a3-fa2d-4765-ac02-3d19c4779d21/openstack-network-exporter/0.log" Feb 02 18:10:47 crc kubenswrapper[4858]: I0202 18:10:47.570732 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a62694a3-fa2d-4765-ac02-3d19c4779d21/ovsdbserver-sb/0.log" Feb 02 18:10:47 crc kubenswrapper[4858]: I0202 18:10:47.728010 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-b4fd7664d-fqkmq_113e6fbe-f0ce-497b-8a16-fb8bc217b584/placement-api/0.log" Feb 02 18:10:47 crc kubenswrapper[4858]: I0202 18:10:47.835706 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-b4fd7664d-fqkmq_113e6fbe-f0ce-497b-8a16-fb8bc217b584/placement-log/0.log" Feb 02 18:10:47 crc kubenswrapper[4858]: I0202 18:10:47.911919 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_09b56fe4-3166-4448-a186-95f3c74199f1/setup-container/0.log" Feb 02 18:10:48 crc kubenswrapper[4858]: I0202 18:10:48.149500 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_09b56fe4-3166-4448-a186-95f3c74199f1/setup-container/0.log" Feb 02 18:10:48 crc kubenswrapper[4858]: I0202 18:10:48.211441 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_09b56fe4-3166-4448-a186-95f3c74199f1/rabbitmq/0.log" Feb 02 18:10:48 crc kubenswrapper[4858]: I0202 18:10:48.239098 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f470a8b9-224f-436f-bbbb-c6ab6b1f587e/setup-container/0.log" Feb 02 18:10:48 crc kubenswrapper[4858]: I0202 18:10:48.387833 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f470a8b9-224f-436f-bbbb-c6ab6b1f587e/setup-container/0.log" Feb 02 18:10:48 crc kubenswrapper[4858]: I0202 18:10:48.458558 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f470a8b9-224f-436f-bbbb-c6ab6b1f587e/rabbitmq/0.log" Feb 02 18:10:48 crc kubenswrapper[4858]: I0202 18:10:48.501826 4858 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw_29249271-e3d7-41c6-8795-5c1b969161e0/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:10:48 crc kubenswrapper[4858]: I0202 18:10:48.673294 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-h7vtt_ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:10:48 crc kubenswrapper[4858]: I0202 18:10:48.801219 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p_6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:10:48 crc kubenswrapper[4858]: I0202 18:10:48.929796 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-ztwkx_4a932e2b-79f7-41ef-b7e6-1e0789b67551/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:10:49 crc kubenswrapper[4858]: I0202 18:10:49.040770 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-2d55x_4ef22884-b1a4-454a-afa5-cde0aaa3439b/ssh-known-hosts-edpm-deployment/0.log" Feb 02 18:10:49 crc kubenswrapper[4858]: I0202 18:10:49.381058 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7748685595-fdxjj_6cdd18b7-595d-4635-9a17-32be92896da1/proxy-server/0.log" Feb 02 18:10:49 crc kubenswrapper[4858]: I0202 18:10:49.400222 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:10:49 crc kubenswrapper[4858]: E0202 18:10:49.400574 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:10:49 crc kubenswrapper[4858]: I0202 18:10:49.461808 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7748685595-fdxjj_6cdd18b7-595d-4635-9a17-32be92896da1/proxy-httpd/0.log" Feb 02 18:10:49 crc kubenswrapper[4858]: I0202 18:10:49.486120 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-44qfs_bf16bc74-b9cb-4774-b646-a4de84eb4dd9/swift-ring-rebalance/0.log" Feb 02 18:10:49 crc kubenswrapper[4858]: I0202 18:10:49.659487 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/account-auditor/0.log" Feb 02 18:10:49 crc kubenswrapper[4858]: I0202 18:10:49.691620 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/account-reaper/0.log" Feb 02 18:10:49 crc kubenswrapper[4858]: I0202 18:10:49.780207 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/account-replicator/0.log" Feb 02 18:10:49 crc kubenswrapper[4858]: I0202 18:10:49.883154 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/account-server/0.log" Feb 02 18:10:49 crc kubenswrapper[4858]: I0202 18:10:49.885244 4858 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/container-auditor/0.log" Feb 02 18:10:49 crc kubenswrapper[4858]: I0202 18:10:49.922347 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/container-replicator/0.log" Feb 02 18:10:50 crc kubenswrapper[4858]: I0202 18:10:50.049517 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/container-server/0.log" Feb 02 18:10:50 crc kubenswrapper[4858]: I0202 18:10:50.086570 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/container-updater/0.log" Feb 02 18:10:50 crc kubenswrapper[4858]: I0202 18:10:50.197428 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/object-expirer/0.log" Feb 02 18:10:50 crc kubenswrapper[4858]: I0202 18:10:50.213529 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/object-auditor/0.log" Feb 02 18:10:50 crc kubenswrapper[4858]: I0202 18:10:50.303026 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/object-replicator/0.log" Feb 02 18:10:50 crc kubenswrapper[4858]: I0202 18:10:50.355210 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/object-server/0.log" Feb 02 18:10:50 crc kubenswrapper[4858]: I0202 18:10:50.445294 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/object-updater/0.log" Feb 02 18:10:50 crc kubenswrapper[4858]: I0202 18:10:50.456845 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/rsync/0.log" Feb 02 18:10:50 crc kubenswrapper[4858]: I0202 18:10:50.556446 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/swift-recon-cron/0.log" Feb 02 18:10:50 crc kubenswrapper[4858]: I0202 18:10:50.908188 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-spthf_dd969e2b-6db6-4175-8fa3-7dfa60a198ca/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:10:51 crc kubenswrapper[4858]: I0202 18:10:51.011358 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52/tempest-tests-tempest-tests-runner/0.log" Feb 02 18:10:51 crc kubenswrapper[4858]: I0202 18:10:51.160149 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_3a3e6ddb-d991-4bf6-a248-b333da853203/test-operator-logs-container/0.log" Feb 02 18:10:51 crc kubenswrapper[4858]: I0202 18:10:51.270819 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw_9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:10:58 crc kubenswrapper[4858]: I0202 18:10:58.894077 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_c386da2d-4b55-47da-aa8c-82b879ae7d3d/memcached/0.log" Feb 02 18:11:02 
crc kubenswrapper[4858]: I0202 18:11:02.401849 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:11:02 crc kubenswrapper[4858]: E0202 18:11:02.403274 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:11:13 crc kubenswrapper[4858]: I0202 18:11:13.401927 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:11:13 crc kubenswrapper[4858]: E0202 18:11:13.412118 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:11:18 crc kubenswrapper[4858]: I0202 18:11:18.298804 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5_b4156898-70e8-4bdc-a254-49c0917d38dc/util/0.log" Feb 02 18:11:18 crc kubenswrapper[4858]: I0202 18:11:18.534340 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5_b4156898-70e8-4bdc-a254-49c0917d38dc/pull/0.log" Feb 02 18:11:18 crc kubenswrapper[4858]: I0202 18:11:18.544071 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5_b4156898-70e8-4bdc-a254-49c0917d38dc/pull/0.log" Feb 02 18:11:18 crc kubenswrapper[4858]: I0202 18:11:18.559910 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5_b4156898-70e8-4bdc-a254-49c0917d38dc/util/0.log" Feb 02 18:11:18 crc kubenswrapper[4858]: I0202 18:11:18.744624 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5_b4156898-70e8-4bdc-a254-49c0917d38dc/util/0.log" Feb 02 18:11:18 crc kubenswrapper[4858]: I0202 18:11:18.761739 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5_b4156898-70e8-4bdc-a254-49c0917d38dc/pull/0.log" Feb 02 18:11:18 crc kubenswrapper[4858]: I0202 18:11:18.780343 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5_b4156898-70e8-4bdc-a254-49c0917d38dc/extract/0.log" Feb 02 18:11:19 crc kubenswrapper[4858]: I0202 18:11:19.035254 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-r786j_a7c0be68-b4e3-47dc-b6c0-acd8878465ee/manager/0.log" Feb 02 18:11:19 crc kubenswrapper[4858]: I0202 18:11:19.071562 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-rgpmv_e61e293a-bb2a-4ccd-bc20-815cc2bfb01b/manager/0.log" Feb 02 18:11:19 crc kubenswrapper[4858]: I0202 18:11:19.249131 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-kcbss_f700cc0f-80eb-46a5-b7d3-b32dccdc2f49/manager/0.log" Feb 02 18:11:19 crc kubenswrapper[4858]: I0202 18:11:19.373642 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-p9qwv_ad1072ec-d0e8-49ff-9971-8f6589bde802/manager/0.log" Feb 02 18:11:19 crc kubenswrapper[4858]: I0202 18:11:19.470085 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-99rfw_76ec111a-d121-411c-9d81-8fcfd6323d49/manager/0.log" Feb 02 18:11:19 crc kubenswrapper[4858]: I0202 18:11:19.556202 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-kcgxf_8eca62a8-4909-4402-89ff-bd59ad42daef/manager/0.log" Feb 02 18:11:19 crc kubenswrapper[4858]: I0202 18:11:19.811349 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-kffpf_44678b87-d59f-4661-93c9-8e2ddb8ea61e/manager/0.log" Feb 02 18:11:19 crc kubenswrapper[4858]: I0202 18:11:19.905427 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-ck77w_5b2eeae9-b158-4d59-8056-b12e1a397d18/manager/0.log" Feb 02 18:11:20 crc kubenswrapper[4858]: I0202 18:11:20.028462 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-rmkrp_8c70d2b3-c4e9-422f-ace6-f11450c068ec/manager/0.log" Feb 02 18:11:20 crc kubenswrapper[4858]: I0202 18:11:20.106166 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-7cq6h_096752c5-391b-4370-b5f6-39ef63d6878e/manager/0.log" Feb 02 18:11:20 crc kubenswrapper[4858]: I0202 18:11:20.500836 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-2p2q6_2600f62e-5615-4217-9629-9b77846634f9/manager/0.log" Feb 02 18:11:20 crc kubenswrapper[4858]: I0202 18:11:20.582692 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-vpbp7_f5578b04-55cc-4bb9-a3f5-27e63ffe0c27/manager/0.log" Feb 02 18:11:20 crc kubenswrapper[4858]: I0202 18:11:20.792272 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-2g5g6_b6d0d2c9-a689-4bcf-b3c8-b8aa25e47898/manager/0.log" Feb 02 18:11:20 crc kubenswrapper[4858]: I0202 18:11:20.918886 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-tfvcz_405115c4-bd24-4b05-b437-a8a27bc1f2b5/manager/0.log" Feb 02 18:11:21 crc kubenswrapper[4858]: I0202 18:11:21.032753 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf_00e707da-7230-4214-82a0-e1b18aad70a8/manager/0.log" Feb 02 18:11:21 crc kubenswrapper[4858]: I0202 18:11:21.304948 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7b85844457-9n8fp_332ff13e-699a-4582-873c-073c20cb6ca0/operator/0.log" Feb 02 18:11:21 crc kubenswrapper[4858]: I0202 18:11:21.492300 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6cl4c_b123c5f8-831f-41b2-a1d0-fcde62501499/registry-server/0.log" Feb 02 18:11:21 crc kubenswrapper[4858]: I0202 18:11:21.715501 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-8r9sc_80f2567c-89d7-4350-a7f2-acd472bc2f68/manager/0.log" Feb 02 18:11:21 crc kubenswrapper[4858]: I0202 18:11:21.905088 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-srbzf_16a9ca97-2b15-4a52-8d2c-eb170a3f2b75/manager/0.log" Feb 02 18:11:22 crc kubenswrapper[4858]: I0202 18:11:22.056472 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-8lqf9_3733a396-b067-4153-891a-1c5b044a7e04/operator/0.log" Feb 02 18:11:22 crc kubenswrapper[4858]: I0202 18:11:22.319600 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-zlfqp_4a72e4a0-8e70-4d04-85c8-15b68840632d/manager/0.log" Feb 02 18:11:22 crc kubenswrapper[4858]: I0202 18:11:22.531722 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-86df59f79f-rczsp_ad13cd52-7254-489a-8960-511bbc2a3360/manager/0.log" Feb 02 18:11:22 crc kubenswrapper[4858]: I0202 18:11:22.532469 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-4jd6l_366ee9f4-9c6e-416a-8603-f6bac0530a6a/manager/0.log" Feb 02 18:11:22 crc kubenswrapper[4858]: I0202 18:11:22.580056 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-z7cp9_da72a0b3-6998-4d0e-b7d3-f4fce5f11f1b/manager/0.log" Feb 02 18:11:22 crc kubenswrapper[4858]: I0202 18:11:22.760570 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-rl5z2_467af09f-e1d2-407e-989e-606a3a3219b0/manager/0.log" Feb 02 18:11:28 crc kubenswrapper[4858]: I0202 18:11:28.400761 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:11:28 crc kubenswrapper[4858]: E0202 18:11:28.401647 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:11:40 crc kubenswrapper[4858]: I0202 18:11:40.409517 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:11:40 crc kubenswrapper[4858]: E0202 18:11:40.411680 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:11:43 crc kubenswrapper[4858]: I0202 18:11:43.121313 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-hgpp5_e330b41c-dacd-4c4b-a013-dd16a913ac54/control-plane-machine-set-operator/0.log" Feb 02 18:11:43 crc kubenswrapper[4858]: I0202 18:11:43.305371 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ssvjj_f4d39c6c-15e3-48a3-82be-2bc3703dbc7f/kube-rbac-proxy/0.log" Feb 02 18:11:43 crc kubenswrapper[4858]: I0202 18:11:43.362120 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ssvjj_f4d39c6c-15e3-48a3-82be-2bc3703dbc7f/machine-api-operator/0.log" Feb 02 18:11:53 crc kubenswrapper[4858]: I0202 18:11:53.400523 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:11:53 crc kubenswrapper[4858]: E0202 18:11:53.402581 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:11:54 crc kubenswrapper[4858]: I0202 18:11:54.957644 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-dzxc6_bc586ae0-865f-490b-8ca0-bb157144af30/cert-manager-controller/0.log" Feb 02 18:11:55 crc kubenswrapper[4858]: I0202 18:11:55.178913 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-7kj75_b09a6151-2124-4f22-b226-a1ae36869433/cert-manager-cainjector/0.log" Feb 02 18:11:55 crc kubenswrapper[4858]: I0202 18:11:55.236754 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-bhlzx_786fe412-07f2-458a-bb89-f77dc747524c/cert-manager-webhook/0.log" Feb 02 18:12:07 crc kubenswrapper[4858]: I0202 18:12:07.401079 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:12:08 crc kubenswrapper[4858]: I0202 18:12:08.090362 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-f8nz8_a69645ff-c03c-4296-aa6a-63cd14095040/nmstate-console-plugin/0.log" Feb 02 18:12:08 crc kubenswrapper[4858]: I0202 18:12:08.318656 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-9rsf6_35bb0d37-e388-42c3-ad03-2cbb0e4a9409/nmstate-handler/0.log" Feb 02 18:12:08 crc kubenswrapper[4858]: I0202 18:12:08.372847 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-8fldl_cdee88da-b22d-4fe4-98a2-a53cadedb993/kube-rbac-proxy/0.log" Feb 02 18:12:08 crc kubenswrapper[4858]: I0202 18:12:08.381760 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" 
event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerStarted","Data":"0407b826940258d2b90ce9df3d656cf4cd038bfd8d47c76fe3dfc58f88e9b7c6"} Feb 02 18:12:08 crc kubenswrapper[4858]: I0202 18:12:08.460868 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-8fldl_cdee88da-b22d-4fe4-98a2-a53cadedb993/nmstate-metrics/0.log" Feb 02 18:12:08 crc kubenswrapper[4858]: I0202 18:12:08.539402 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-pmkq5_f3603e7c-ff14-4deb-a9d8-e5751a729be6/nmstate-operator/0.log" Feb 02 18:12:08 crc kubenswrapper[4858]: I0202 18:12:08.671776 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-t9gvb_11353839-2688-4112-a9d9-87bead34c26a/nmstate-webhook/0.log" Feb 02 18:12:34 crc kubenswrapper[4858]: I0202 18:12:34.191520 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6bvx9_0b58bf4d-52bb-4876-8555-b8b403e0cbcb/kube-rbac-proxy/0.log" Feb 02 18:12:34 crc kubenswrapper[4858]: I0202 18:12:34.283061 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6bvx9_0b58bf4d-52bb-4876-8555-b8b403e0cbcb/controller/0.log" Feb 02 18:12:34 crc kubenswrapper[4858]: I0202 18:12:34.625229 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-frr-files/0.log" Feb 02 18:12:34 crc kubenswrapper[4858]: I0202 18:12:34.760105 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-frr-files/0.log" Feb 02 18:12:34 crc kubenswrapper[4858]: I0202 18:12:34.777829 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-reloader/0.log" Feb 02 18:12:34 crc kubenswrapper[4858]: I0202 18:12:34.851126 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-reloader/0.log" Feb 02 18:12:34 crc kubenswrapper[4858]: I0202 18:12:34.853760 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-metrics/0.log" Feb 02 18:12:35 crc kubenswrapper[4858]: I0202 18:12:35.048541 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-frr-files/0.log" Feb 02 18:12:35 crc kubenswrapper[4858]: I0202 18:12:35.060532 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-metrics/0.log" Feb 02 18:12:35 crc kubenswrapper[4858]: I0202 18:12:35.064835 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-metrics/0.log" Feb 02 18:12:35 crc kubenswrapper[4858]: I0202 18:12:35.067452 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-reloader/0.log" Feb 02 18:12:35 crc kubenswrapper[4858]: I0202 18:12:35.260723 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-frr-files/0.log" Feb 02 18:12:35 crc kubenswrapper[4858]: I0202 18:12:35.268149 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-metrics/0.log" Feb 02 18:12:35 crc kubenswrapper[4858]: I0202 18:12:35.294005 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-reloader/0.log" Feb 02 18:12:35 crc kubenswrapper[4858]: I0202 18:12:35.304756 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/controller/0.log" Feb 02 18:12:35 crc kubenswrapper[4858]: I0202 18:12:35.451738 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/frr-metrics/0.log" Feb 02 18:12:35 crc kubenswrapper[4858]: I0202 18:12:35.517017 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/kube-rbac-proxy/0.log" Feb 02 18:12:35 crc kubenswrapper[4858]: I0202 18:12:35.531990 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/kube-rbac-proxy-frr/0.log" Feb 02 18:12:35 crc kubenswrapper[4858]: I0202 18:12:35.673740 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/reloader/0.log" Feb 02 18:12:35 crc kubenswrapper[4858]: I0202 18:12:35.764167 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-pgxzh_1226b394-7ee5-4947-8d99-532106bb7baa/frr-k8s-webhook-server/0.log" Feb 02 18:12:36 crc kubenswrapper[4858]: I0202 18:12:36.029530 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-749875bd8b-wr4x9_3ba8c286-d0ce-40d1-b759-9d983474210b/manager/0.log" Feb 02 18:12:36 crc kubenswrapper[4858]: I0202 18:12:36.136928 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5854c4649f-zl8j4_5efe8813-bcae-42c6-be1a-6f60809e7e3e/webhook-server/0.log" Feb 02 18:12:36 crc kubenswrapper[4858]: I0202 18:12:36.270289 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dk7fw_c72561ce-1db8-4883-97fe-488222b2f232/kube-rbac-proxy/0.log" Feb 02 18:12:36 crc kubenswrapper[4858]: I0202 18:12:36.774529 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/frr/0.log" Feb 02 18:12:36 crc kubenswrapper[4858]: I0202 18:12:36.798150 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dk7fw_c72561ce-1db8-4883-97fe-488222b2f232/speaker/0.log" Feb 02 18:12:48 crc kubenswrapper[4858]: I0202 18:12:48.129368 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m_10d6ebd8-c224-43f2-b27c-bb5944ad819d/util/0.log" Feb 02 18:12:48 crc kubenswrapper[4858]: I0202 18:12:48.370814 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m_10d6ebd8-c224-43f2-b27c-bb5944ad819d/util/0.log" Feb 02 18:12:48 crc kubenswrapper[4858]: I0202 18:12:48.379780 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m_10d6ebd8-c224-43f2-b27c-bb5944ad819d/pull/0.log" Feb 02 
18:12:48 crc kubenswrapper[4858]: I0202 18:12:48.389344 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m_10d6ebd8-c224-43f2-b27c-bb5944ad819d/pull/0.log" Feb 02 18:12:48 crc kubenswrapper[4858]: I0202 18:12:48.557477 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m_10d6ebd8-c224-43f2-b27c-bb5944ad819d/util/0.log" Feb 02 18:12:48 crc kubenswrapper[4858]: I0202 18:12:48.557678 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m_10d6ebd8-c224-43f2-b27c-bb5944ad819d/pull/0.log" Feb 02 18:12:48 crc kubenswrapper[4858]: I0202 18:12:48.597351 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m_10d6ebd8-c224-43f2-b27c-bb5944ad819d/extract/0.log" Feb 02 18:12:48 crc kubenswrapper[4858]: I0202 18:12:48.733395 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27_9c43bf8c-3e4f-4983-a524-7033f240b2f7/util/0.log" Feb 02 18:12:48 crc kubenswrapper[4858]: I0202 18:12:48.917595 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27_9c43bf8c-3e4f-4983-a524-7033f240b2f7/util/0.log" Feb 02 18:12:48 crc kubenswrapper[4858]: I0202 18:12:48.921230 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27_9c43bf8c-3e4f-4983-a524-7033f240b2f7/pull/0.log" Feb 02 18:12:48 crc kubenswrapper[4858]: I0202 18:12:48.921595 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27_9c43bf8c-3e4f-4983-a524-7033f240b2f7/pull/0.log" Feb 02 18:12:49 crc kubenswrapper[4858]: I0202 18:12:49.087370 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27_9c43bf8c-3e4f-4983-a524-7033f240b2f7/util/0.log" Feb 02 18:12:49 crc kubenswrapper[4858]: I0202 18:12:49.141773 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27_9c43bf8c-3e4f-4983-a524-7033f240b2f7/pull/0.log" Feb 02 18:12:49 crc kubenswrapper[4858]: I0202 18:12:49.141993 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27_9c43bf8c-3e4f-4983-a524-7033f240b2f7/extract/0.log" Feb 02 18:12:49 crc kubenswrapper[4858]: I0202 18:12:49.265643 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q6b5c_05b84894-e183-4874-8ca5-002436026fce/extract-utilities/0.log" Feb 02 18:12:49 crc kubenswrapper[4858]: I0202 18:12:49.472024 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q6b5c_05b84894-e183-4874-8ca5-002436026fce/extract-utilities/0.log" Feb 02 18:12:49 crc kubenswrapper[4858]: I0202 18:12:49.472046 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-q6b5c_05b84894-e183-4874-8ca5-002436026fce/extract-content/0.log" Feb 02 18:12:49 crc kubenswrapper[4858]: I0202 18:12:49.483681 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q6b5c_05b84894-e183-4874-8ca5-002436026fce/extract-content/0.log" Feb 02 18:12:49 crc kubenswrapper[4858]: I0202 18:12:49.662191 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q6b5c_05b84894-e183-4874-8ca5-002436026fce/extract-utilities/0.log" Feb 02 18:12:49 crc kubenswrapper[4858]: I0202 18:12:49.664109 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q6b5c_05b84894-e183-4874-8ca5-002436026fce/extract-content/0.log" Feb 02 18:12:49 crc kubenswrapper[4858]: I0202 18:12:49.922158 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-phnlb_c1397f76-ca47-41cd-860f-4ecb3e5856fb/extract-utilities/0.log" Feb 02 18:12:50 crc kubenswrapper[4858]: I0202 18:12:50.065744 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-phnlb_c1397f76-ca47-41cd-860f-4ecb3e5856fb/extract-utilities/0.log" Feb 02 18:12:50 crc kubenswrapper[4858]: I0202 18:12:50.143740 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-phnlb_c1397f76-ca47-41cd-860f-4ecb3e5856fb/extract-content/0.log" Feb 02 18:12:50 crc kubenswrapper[4858]: I0202 18:12:50.149257 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q6b5c_05b84894-e183-4874-8ca5-002436026fce/registry-server/0.log" Feb 02 18:12:50 crc kubenswrapper[4858]: I0202 18:12:50.171704 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-phnlb_c1397f76-ca47-41cd-860f-4ecb3e5856fb/extract-content/0.log" Feb 02 18:12:50 crc kubenswrapper[4858]: I0202 18:12:50.380382 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-phnlb_c1397f76-ca47-41cd-860f-4ecb3e5856fb/extract-utilities/0.log" Feb 02 18:12:50 crc kubenswrapper[4858]: I0202 18:12:50.381852 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-phnlb_c1397f76-ca47-41cd-860f-4ecb3e5856fb/extract-content/0.log" Feb 02 18:12:50 crc kubenswrapper[4858]: I0202 18:12:50.626540 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-g7r5b_49416635-c370-4a58-aa72-0c1d52fab5f3/marketplace-operator/0.log" Feb 02 18:12:50 crc kubenswrapper[4858]: I0202 18:12:50.671773 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c22x2_852607b1-d3cd-4688-a469-872ae6c5e98d/extract-utilities/0.log" Feb 02 18:12:50 crc kubenswrapper[4858]: I0202 18:12:50.891802 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c22x2_852607b1-d3cd-4688-a469-872ae6c5e98d/extract-content/0.log" Feb 02 18:12:50 crc kubenswrapper[4858]: I0202 18:12:50.905501 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-phnlb_c1397f76-ca47-41cd-860f-4ecb3e5856fb/registry-server/0.log" Feb 02 18:12:50 crc kubenswrapper[4858]: I0202 18:12:50.922803 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-c22x2_852607b1-d3cd-4688-a469-872ae6c5e98d/extract-content/0.log" Feb 02 18:12:50 crc kubenswrapper[4858]: I0202 18:12:50.954311 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c22x2_852607b1-d3cd-4688-a469-872ae6c5e98d/extract-utilities/0.log" Feb 02 18:12:51 crc kubenswrapper[4858]: I0202 18:12:51.124509 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c22x2_852607b1-d3cd-4688-a469-872ae6c5e98d/extract-content/0.log" Feb 02 18:12:51 crc kubenswrapper[4858]: I0202 18:12:51.145752 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c22x2_852607b1-d3cd-4688-a469-872ae6c5e98d/extract-utilities/0.log" Feb 02 18:12:51 crc kubenswrapper[4858]: I0202 18:12:51.276429 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c22x2_852607b1-d3cd-4688-a469-872ae6c5e98d/registry-server/0.log" Feb 02 18:12:51 crc kubenswrapper[4858]: I0202 18:12:51.403034 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-llwt8_454bd674-ee00-420b-9910-16fe062ea116/extract-utilities/0.log" Feb 02 18:12:51 crc kubenswrapper[4858]: I0202 18:12:51.613397 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-llwt8_454bd674-ee00-420b-9910-16fe062ea116/extract-content/0.log" Feb 02 18:12:51 crc kubenswrapper[4858]: I0202 18:12:51.634682 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-llwt8_454bd674-ee00-420b-9910-16fe062ea116/extract-content/0.log" Feb 02 18:12:51 crc kubenswrapper[4858]: I0202 18:12:51.661011 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-llwt8_454bd674-ee00-420b-9910-16fe062ea116/extract-utilities/0.log" Feb 02 18:12:51 crc kubenswrapper[4858]: I0202 18:12:51.848513 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-llwt8_454bd674-ee00-420b-9910-16fe062ea116/extract-content/0.log" Feb 02 18:12:51 crc kubenswrapper[4858]: I0202 18:12:51.878551 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-llwt8_454bd674-ee00-420b-9910-16fe062ea116/extract-utilities/0.log" Feb 02 18:12:52 crc kubenswrapper[4858]: I0202 18:12:52.313325 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-llwt8_454bd674-ee00-420b-9910-16fe062ea116/registry-server/0.log" Feb 02 18:13:13 crc kubenswrapper[4858]: I0202 18:13:13.861710 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mrr7j"] Feb 02 18:13:13 crc kubenswrapper[4858]: E0202 18:13:13.863803 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="577b3dd1-7c64-4413-81a1-d7705aa6393f" containerName="container-00" Feb 02 18:13:13 crc kubenswrapper[4858]: I0202 18:13:13.863925 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="577b3dd1-7c64-4413-81a1-d7705aa6393f" containerName="container-00" Feb 02 18:13:13 crc kubenswrapper[4858]: I0202 18:13:13.864450 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="577b3dd1-7c64-4413-81a1-d7705aa6393f" containerName="container-00" Feb 02 18:13:13 crc kubenswrapper[4858]: I0202 18:13:13.866409 4858 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mrr7j" Feb 02 18:13:13 crc kubenswrapper[4858]: I0202 18:13:13.874138 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2080a357-c371-422d-b95d-a830f59d4838-catalog-content\") pod \"community-operators-mrr7j\" (UID: \"2080a357-c371-422d-b95d-a830f59d4838\") " pod="openshift-marketplace/community-operators-mrr7j" Feb 02 18:13:13 crc kubenswrapper[4858]: I0202 18:13:13.874344 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szz65\" (UniqueName: \"kubernetes.io/projected/2080a357-c371-422d-b95d-a830f59d4838-kube-api-access-szz65\") pod \"community-operators-mrr7j\" (UID: \"2080a357-c371-422d-b95d-a830f59d4838\") " pod="openshift-marketplace/community-operators-mrr7j" Feb 02 18:13:13 crc kubenswrapper[4858]: I0202 18:13:13.874440 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2080a357-c371-422d-b95d-a830f59d4838-utilities\") pod \"community-operators-mrr7j\" (UID: \"2080a357-c371-422d-b95d-a830f59d4838\") " pod="openshift-marketplace/community-operators-mrr7j" Feb 02 18:13:13 crc kubenswrapper[4858]: I0202 18:13:13.875868 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mrr7j"] Feb 02 18:13:13 crc kubenswrapper[4858]: I0202 18:13:13.978143 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szz65\" (UniqueName: \"kubernetes.io/projected/2080a357-c371-422d-b95d-a830f59d4838-kube-api-access-szz65\") pod \"community-operators-mrr7j\" (UID: \"2080a357-c371-422d-b95d-a830f59d4838\") " pod="openshift-marketplace/community-operators-mrr7j" Feb 02 18:13:13 crc kubenswrapper[4858]: I0202 18:13:13.978789 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2080a357-c371-422d-b95d-a830f59d4838-utilities\") pod \"community-operators-mrr7j\" (UID: \"2080a357-c371-422d-b95d-a830f59d4838\") " pod="openshift-marketplace/community-operators-mrr7j" Feb 02 18:13:13 crc kubenswrapper[4858]: I0202 18:13:13.979175 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2080a357-c371-422d-b95d-a830f59d4838-catalog-content\") pod \"community-operators-mrr7j\" (UID: \"2080a357-c371-422d-b95d-a830f59d4838\") " pod="openshift-marketplace/community-operators-mrr7j" Feb 02 18:13:13 crc kubenswrapper[4858]: I0202 18:13:13.979709 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2080a357-c371-422d-b95d-a830f59d4838-utilities\") pod \"community-operators-mrr7j\" (UID: \"2080a357-c371-422d-b95d-a830f59d4838\") " pod="openshift-marketplace/community-operators-mrr7j" Feb 02 18:13:13 crc kubenswrapper[4858]: I0202 18:13:13.979884 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2080a357-c371-422d-b95d-a830f59d4838-catalog-content\") pod \"community-operators-mrr7j\" (UID: \"2080a357-c371-422d-b95d-a830f59d4838\") " pod="openshift-marketplace/community-operators-mrr7j" Feb 02 18:13:14 crc kubenswrapper[4858]: I0202 18:13:14.016808 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szz65\" (UniqueName: \"kubernetes.io/projected/2080a357-c371-422d-b95d-a830f59d4838-kube-api-access-szz65\") pod \"community-operators-mrr7j\" (UID: \"2080a357-c371-422d-b95d-a830f59d4838\") " pod="openshift-marketplace/community-operators-mrr7j" Feb 02 18:13:14 crc kubenswrapper[4858]: I0202 18:13:14.193314 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mrr7j" Feb 02 18:13:14 crc kubenswrapper[4858]: I0202 18:13:14.702108 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mrr7j"] Feb 02 18:13:15 crc kubenswrapper[4858]: I0202 18:13:15.004175 4858 generic.go:334] "Generic (PLEG): container finished" podID="2080a357-c371-422d-b95d-a830f59d4838" containerID="3dc2c27d900ec44d5b79922b601a67887d66617baea66cc0d24643feee5886d2" exitCode=0 Feb 02 18:13:15 crc kubenswrapper[4858]: I0202 18:13:15.004224 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mrr7j" event={"ID":"2080a357-c371-422d-b95d-a830f59d4838","Type":"ContainerDied","Data":"3dc2c27d900ec44d5b79922b601a67887d66617baea66cc0d24643feee5886d2"} Feb 02 18:13:15 crc kubenswrapper[4858]: I0202 18:13:15.004481 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mrr7j" event={"ID":"2080a357-c371-422d-b95d-a830f59d4838","Type":"ContainerStarted","Data":"1a6bb3d1977c6d71398d323f79255b0c721b51f3b0e20a8499afcb8fb0584f69"} Feb 02 18:13:16 crc kubenswrapper[4858]: I0202 18:13:16.015092 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mrr7j" event={"ID":"2080a357-c371-422d-b95d-a830f59d4838","Type":"ContainerStarted","Data":"f3add9d27c92e4395d689c9a5be56246e7166ce66a96334c7454ddd62cad28bd"} Feb 02 18:13:17 crc kubenswrapper[4858]: I0202 18:13:17.027571 4858 generic.go:334] "Generic (PLEG): container finished" podID="2080a357-c371-422d-b95d-a830f59d4838" containerID="f3add9d27c92e4395d689c9a5be56246e7166ce66a96334c7454ddd62cad28bd" exitCode=0 Feb 02 18:13:17 crc kubenswrapper[4858]: I0202 18:13:17.027934 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mrr7j" event={"ID":"2080a357-c371-422d-b95d-a830f59d4838","Type":"ContainerDied","Data":"f3add9d27c92e4395d689c9a5be56246e7166ce66a96334c7454ddd62cad28bd"} Feb 02 18:13:18 crc kubenswrapper[4858]: I0202 18:13:18.052326 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mrr7j" event={"ID":"2080a357-c371-422d-b95d-a830f59d4838","Type":"ContainerStarted","Data":"f105383399789a73ba7f399e50c7d59c21a9f66e436a1abce0a34093be97e700"} Feb 02 18:13:18 crc kubenswrapper[4858]: I0202 18:13:18.092501 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mrr7j" podStartSLOduration=2.41378796 podStartE2EDuration="5.09247585s" podCreationTimestamp="2026-02-02 18:13:13 +0000 UTC" firstStartedPulling="2026-02-02 18:13:15.015962808 +0000 UTC m=+3496.168378073" lastFinishedPulling="2026-02-02 18:13:17.694650698 +0000 UTC m=+3498.847065963" observedRunningTime="2026-02-02 18:13:18.084246425 +0000 UTC m=+3499.236661690" watchObservedRunningTime="2026-02-02 18:13:18.09247585 +0000 UTC m=+3499.244891125" Feb 02 18:13:24 crc kubenswrapper[4858]: I0202 18:13:24.193496 4858 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mrr7j" Feb 02 18:13:24 crc kubenswrapper[4858]: I0202 18:13:24.194148 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mrr7j" Feb 02 18:13:24 crc kubenswrapper[4858]: I0202 18:13:24.252556 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mrr7j" Feb 02 18:13:25 crc kubenswrapper[4858]: I0202 18:13:25.161793 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mrr7j" Feb 02 18:13:25 crc kubenswrapper[4858]: I0202 18:13:25.226085 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mrr7j"] Feb 02 18:13:27 crc kubenswrapper[4858]: I0202 18:13:27.127388 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mrr7j" podUID="2080a357-c371-422d-b95d-a830f59d4838" containerName="registry-server" containerID="cri-o://f105383399789a73ba7f399e50c7d59c21a9f66e436a1abce0a34093be97e700" gracePeriod=2 Feb 02 18:13:27 crc kubenswrapper[4858]: I0202 18:13:27.615193 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mrr7j" Feb 02 18:13:27 crc kubenswrapper[4858]: I0202 18:13:27.680647 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2080a357-c371-422d-b95d-a830f59d4838-utilities\") pod \"2080a357-c371-422d-b95d-a830f59d4838\" (UID: \"2080a357-c371-422d-b95d-a830f59d4838\") " Feb 02 18:13:27 crc kubenswrapper[4858]: I0202 18:13:27.680728 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szz65\" (UniqueName: \"kubernetes.io/projected/2080a357-c371-422d-b95d-a830f59d4838-kube-api-access-szz65\") pod \"2080a357-c371-422d-b95d-a830f59d4838\" (UID: \"2080a357-c371-422d-b95d-a830f59d4838\") " Feb 02 18:13:27 crc kubenswrapper[4858]: I0202 18:13:27.680817 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2080a357-c371-422d-b95d-a830f59d4838-catalog-content\") pod \"2080a357-c371-422d-b95d-a830f59d4838\" (UID: \"2080a357-c371-422d-b95d-a830f59d4838\") " Feb 02 18:13:27 crc kubenswrapper[4858]: I0202 18:13:27.681913 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2080a357-c371-422d-b95d-a830f59d4838-utilities" (OuterVolumeSpecName: "utilities") pod "2080a357-c371-422d-b95d-a830f59d4838" (UID: "2080a357-c371-422d-b95d-a830f59d4838"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:13:27 crc kubenswrapper[4858]: I0202 18:13:27.694293 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2080a357-c371-422d-b95d-a830f59d4838-kube-api-access-szz65" (OuterVolumeSpecName: "kube-api-access-szz65") pod "2080a357-c371-422d-b95d-a830f59d4838" (UID: "2080a357-c371-422d-b95d-a830f59d4838"). InnerVolumeSpecName "kube-api-access-szz65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:13:27 crc kubenswrapper[4858]: I0202 18:13:27.737870 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2080a357-c371-422d-b95d-a830f59d4838-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2080a357-c371-422d-b95d-a830f59d4838" (UID: "2080a357-c371-422d-b95d-a830f59d4838"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:13:27 crc kubenswrapper[4858]: I0202 18:13:27.784623 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2080a357-c371-422d-b95d-a830f59d4838-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 18:13:27 crc kubenswrapper[4858]: I0202 18:13:27.784669 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szz65\" (UniqueName: \"kubernetes.io/projected/2080a357-c371-422d-b95d-a830f59d4838-kube-api-access-szz65\") on node \"crc\" DevicePath \"\"" Feb 02 18:13:27 crc kubenswrapper[4858]: I0202 18:13:27.784684 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2080a357-c371-422d-b95d-a830f59d4838-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 18:13:28 crc kubenswrapper[4858]: I0202 18:13:28.153934 4858 generic.go:334] "Generic (PLEG): container finished" podID="2080a357-c371-422d-b95d-a830f59d4838" containerID="f105383399789a73ba7f399e50c7d59c21a9f66e436a1abce0a34093be97e700" exitCode=0 Feb 02 18:13:28 crc kubenswrapper[4858]: I0202 18:13:28.154034 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mrr7j" Feb 02 18:13:28 crc kubenswrapper[4858]: I0202 18:13:28.154016 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mrr7j" event={"ID":"2080a357-c371-422d-b95d-a830f59d4838","Type":"ContainerDied","Data":"f105383399789a73ba7f399e50c7d59c21a9f66e436a1abce0a34093be97e700"} Feb 02 18:13:28 crc kubenswrapper[4858]: I0202 18:13:28.155515 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mrr7j" event={"ID":"2080a357-c371-422d-b95d-a830f59d4838","Type":"ContainerDied","Data":"1a6bb3d1977c6d71398d323f79255b0c721b51f3b0e20a8499afcb8fb0584f69"} Feb 02 18:13:28 crc kubenswrapper[4858]: I0202 18:13:28.155614 4858 scope.go:117] "RemoveContainer" containerID="f105383399789a73ba7f399e50c7d59c21a9f66e436a1abce0a34093be97e700" Feb 02 18:13:28 crc kubenswrapper[4858]: I0202 18:13:28.180658 4858 scope.go:117] "RemoveContainer" containerID="f3add9d27c92e4395d689c9a5be56246e7166ce66a96334c7454ddd62cad28bd" Feb 02 18:13:28 crc kubenswrapper[4858]: I0202 18:13:28.196701 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mrr7j"] Feb 02 18:13:28 crc kubenswrapper[4858]: I0202 18:13:28.211515 4858 scope.go:117] "RemoveContainer" containerID="3dc2c27d900ec44d5b79922b601a67887d66617baea66cc0d24643feee5886d2" Feb 02 18:13:28 crc kubenswrapper[4858]: I0202 18:13:28.213362 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mrr7j"] Feb 02 18:13:28 crc kubenswrapper[4858]: I0202 18:13:28.259080 4858 scope.go:117] "RemoveContainer" containerID="f105383399789a73ba7f399e50c7d59c21a9f66e436a1abce0a34093be97e700" Feb 02 18:13:28 crc kubenswrapper[4858]: E0202 18:13:28.260180 4858 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f105383399789a73ba7f399e50c7d59c21a9f66e436a1abce0a34093be97e700\": container with ID starting with f105383399789a73ba7f399e50c7d59c21a9f66e436a1abce0a34093be97e700 not found: ID does not exist" containerID="f105383399789a73ba7f399e50c7d59c21a9f66e436a1abce0a34093be97e700" Feb 02 18:13:28 crc kubenswrapper[4858]: I0202 18:13:28.260219 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f105383399789a73ba7f399e50c7d59c21a9f66e436a1abce0a34093be97e700"} err="failed to get container status \"f105383399789a73ba7f399e50c7d59c21a9f66e436a1abce0a34093be97e700\": rpc error: code = NotFound desc = could not find container \"f105383399789a73ba7f399e50c7d59c21a9f66e436a1abce0a34093be97e700\": container with ID starting with f105383399789a73ba7f399e50c7d59c21a9f66e436a1abce0a34093be97e700 not found: ID does not exist" Feb 02 18:13:28 crc kubenswrapper[4858]: I0202 18:13:28.260248 4858 scope.go:117] "RemoveContainer" containerID="f3add9d27c92e4395d689c9a5be56246e7166ce66a96334c7454ddd62cad28bd" Feb 02 18:13:28 crc kubenswrapper[4858]: E0202 18:13:28.261773 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3add9d27c92e4395d689c9a5be56246e7166ce66a96334c7454ddd62cad28bd\": container with ID starting with f3add9d27c92e4395d689c9a5be56246e7166ce66a96334c7454ddd62cad28bd not found: ID does not exist" containerID="f3add9d27c92e4395d689c9a5be56246e7166ce66a96334c7454ddd62cad28bd" Feb 02 18:13:28 crc kubenswrapper[4858]: I0202 18:13:28.261800 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3add9d27c92e4395d689c9a5be56246e7166ce66a96334c7454ddd62cad28bd"} err="failed to get container status \"f3add9d27c92e4395d689c9a5be56246e7166ce66a96334c7454ddd62cad28bd\": rpc error: code = NotFound desc = could not find container \"f3add9d27c92e4395d689c9a5be56246e7166ce66a96334c7454ddd62cad28bd\": container with ID starting with f3add9d27c92e4395d689c9a5be56246e7166ce66a96334c7454ddd62cad28bd not found: ID does not exist" Feb 02 18:13:28 crc kubenswrapper[4858]: I0202 18:13:28.261817 4858 scope.go:117] "RemoveContainer" containerID="3dc2c27d900ec44d5b79922b601a67887d66617baea66cc0d24643feee5886d2" Feb 02 18:13:28 crc kubenswrapper[4858]: E0202 18:13:28.262248 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dc2c27d900ec44d5b79922b601a67887d66617baea66cc0d24643feee5886d2\": container with ID starting with 3dc2c27d900ec44d5b79922b601a67887d66617baea66cc0d24643feee5886d2 not found: ID does not exist" containerID="3dc2c27d900ec44d5b79922b601a67887d66617baea66cc0d24643feee5886d2" Feb 02 18:13:28 crc kubenswrapper[4858]: I0202 18:13:28.262277 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dc2c27d900ec44d5b79922b601a67887d66617baea66cc0d24643feee5886d2"} err="failed to get container status \"3dc2c27d900ec44d5b79922b601a67887d66617baea66cc0d24643feee5886d2\": rpc error: code = NotFound desc = could not find container \"3dc2c27d900ec44d5b79922b601a67887d66617baea66cc0d24643feee5886d2\": container with ID starting with 3dc2c27d900ec44d5b79922b601a67887d66617baea66cc0d24643feee5886d2 not found: ID does not exist" Feb 02 18:13:28 crc kubenswrapper[4858]: I0202 18:13:28.411807 4858 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="2080a357-c371-422d-b95d-a830f59d4838" path="/var/lib/kubelet/pods/2080a357-c371-422d-b95d-a830f59d4838/volumes" Feb 02 18:13:29 crc kubenswrapper[4858]: I0202 18:13:29.899957 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zj9ns"] Feb 02 18:13:29 crc kubenswrapper[4858]: E0202 18:13:29.900716 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2080a357-c371-422d-b95d-a830f59d4838" containerName="extract-content" Feb 02 18:13:29 crc kubenswrapper[4858]: I0202 18:13:29.900731 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2080a357-c371-422d-b95d-a830f59d4838" containerName="extract-content" Feb 02 18:13:29 crc kubenswrapper[4858]: E0202 18:13:29.900744 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2080a357-c371-422d-b95d-a830f59d4838" containerName="extract-utilities" Feb 02 18:13:29 crc kubenswrapper[4858]: I0202 18:13:29.900751 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2080a357-c371-422d-b95d-a830f59d4838" containerName="extract-utilities" Feb 02 18:13:29 crc kubenswrapper[4858]: E0202 18:13:29.900773 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2080a357-c371-422d-b95d-a830f59d4838" containerName="registry-server" Feb 02 18:13:29 crc kubenswrapper[4858]: I0202 18:13:29.900781 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2080a357-c371-422d-b95d-a830f59d4838" containerName="registry-server" Feb 02 18:13:29 crc kubenswrapper[4858]: I0202 18:13:29.901050 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2080a357-c371-422d-b95d-a830f59d4838" containerName="registry-server" Feb 02 18:13:29 crc kubenswrapper[4858]: I0202 18:13:29.903443 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zj9ns" Feb 02 18:13:29 crc kubenswrapper[4858]: I0202 18:13:29.917129 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zj9ns"] Feb 02 18:13:30 crc kubenswrapper[4858]: I0202 18:13:30.034303 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a66575-f421-49c6-b952-e9145a528831-catalog-content\") pod \"certified-operators-zj9ns\" (UID: \"48a66575-f421-49c6-b952-e9145a528831\") " pod="openshift-marketplace/certified-operators-zj9ns" Feb 02 18:13:30 crc kubenswrapper[4858]: I0202 18:13:30.034595 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a66575-f421-49c6-b952-e9145a528831-utilities\") pod \"certified-operators-zj9ns\" (UID: \"48a66575-f421-49c6-b952-e9145a528831\") " pod="openshift-marketplace/certified-operators-zj9ns" Feb 02 18:13:30 crc kubenswrapper[4858]: I0202 18:13:30.034706 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k4l4\" (UniqueName: \"kubernetes.io/projected/48a66575-f421-49c6-b952-e9145a528831-kube-api-access-4k4l4\") pod \"certified-operators-zj9ns\" (UID: \"48a66575-f421-49c6-b952-e9145a528831\") " pod="openshift-marketplace/certified-operators-zj9ns" Feb 02 18:13:30 crc kubenswrapper[4858]: I0202 18:13:30.137343 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a66575-f421-49c6-b952-e9145a528831-catalog-content\") pod \"certified-operators-zj9ns\" (UID: \"48a66575-f421-49c6-b952-e9145a528831\") " pod="openshift-marketplace/certified-operators-zj9ns" Feb 02 18:13:30 crc kubenswrapper[4858]: I0202 18:13:30.137747 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a66575-f421-49c6-b952-e9145a528831-utilities\") pod \"certified-operators-zj9ns\" (UID: \"48a66575-f421-49c6-b952-e9145a528831\") " pod="openshift-marketplace/certified-operators-zj9ns" Feb 02 18:13:30 crc kubenswrapper[4858]: I0202 18:13:30.138112 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k4l4\" (UniqueName: \"kubernetes.io/projected/48a66575-f421-49c6-b952-e9145a528831-kube-api-access-4k4l4\") pod \"certified-operators-zj9ns\" (UID: \"48a66575-f421-49c6-b952-e9145a528831\") " pod="openshift-marketplace/certified-operators-zj9ns" Feb 02 18:13:30 crc kubenswrapper[4858]: I0202 18:13:30.138295 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a66575-f421-49c6-b952-e9145a528831-utilities\") pod \"certified-operators-zj9ns\" (UID: \"48a66575-f421-49c6-b952-e9145a528831\") " pod="openshift-marketplace/certified-operators-zj9ns" Feb 02 18:13:30 crc kubenswrapper[4858]: I0202 18:13:30.138582 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a66575-f421-49c6-b952-e9145a528831-catalog-content\") pod \"certified-operators-zj9ns\" (UID: \"48a66575-f421-49c6-b952-e9145a528831\") " pod="openshift-marketplace/certified-operators-zj9ns" Feb 02 18:13:30 crc kubenswrapper[4858]: I0202 18:13:30.163504 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4k4l4\" (UniqueName: \"kubernetes.io/projected/48a66575-f421-49c6-b952-e9145a528831-kube-api-access-4k4l4\") pod \"certified-operators-zj9ns\" (UID: \"48a66575-f421-49c6-b952-e9145a528831\") " pod="openshift-marketplace/certified-operators-zj9ns" Feb 02 18:13:30 crc kubenswrapper[4858]: I0202 18:13:30.231255 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zj9ns" Feb 02 18:13:30 crc kubenswrapper[4858]: I0202 18:13:30.758677 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zj9ns"] Feb 02 18:13:31 crc kubenswrapper[4858]: I0202 18:13:31.184845 4858 generic.go:334] "Generic (PLEG): container finished" podID="48a66575-f421-49c6-b952-e9145a528831" containerID="ee206edffae92c86169827dae66880636228e4db08f71d6e35ab931238da90eb" exitCode=0 Feb 02 18:13:31 crc kubenswrapper[4858]: I0202 18:13:31.184939 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zj9ns" event={"ID":"48a66575-f421-49c6-b952-e9145a528831","Type":"ContainerDied","Data":"ee206edffae92c86169827dae66880636228e4db08f71d6e35ab931238da90eb"} Feb 02 18:13:31 crc kubenswrapper[4858]: I0202 18:13:31.186352 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zj9ns" event={"ID":"48a66575-f421-49c6-b952-e9145a528831","Type":"ContainerStarted","Data":"63e764cef1b99de9e693aea419ffb3ab5f349ac87f5d898200e5e0b5886e97b5"} Feb 02 18:13:32 crc kubenswrapper[4858]: I0202 18:13:32.196059 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zj9ns" event={"ID":"48a66575-f421-49c6-b952-e9145a528831","Type":"ContainerStarted","Data":"1c9b91f9d3688d8b9568c098f53439a65273071be7b1fe083ce6169ec2a19d39"} Feb 02 18:13:33 crc kubenswrapper[4858]: I0202 18:13:33.208743 4858 generic.go:334] "Generic (PLEG): container finished" podID="48a66575-f421-49c6-b952-e9145a528831" containerID="1c9b91f9d3688d8b9568c098f53439a65273071be7b1fe083ce6169ec2a19d39" exitCode=0 Feb 02 18:13:33 crc kubenswrapper[4858]: I0202 18:13:33.209101 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zj9ns" event={"ID":"48a66575-f421-49c6-b952-e9145a528831","Type":"ContainerDied","Data":"1c9b91f9d3688d8b9568c098f53439a65273071be7b1fe083ce6169ec2a19d39"} Feb 02 18:13:34 crc kubenswrapper[4858]: I0202 18:13:34.219495 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zj9ns" event={"ID":"48a66575-f421-49c6-b952-e9145a528831","Type":"ContainerStarted","Data":"3651a9ceae6ae6ecbbba2275c057732b862dc5e526c860f4c1536aec8550be1e"} Feb 02 18:13:34 crc kubenswrapper[4858]: I0202 18:13:34.248296 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zj9ns" podStartSLOduration=2.808871462 podStartE2EDuration="5.248272327s" podCreationTimestamp="2026-02-02 18:13:29 +0000 UTC" firstStartedPulling="2026-02-02 18:13:31.186440544 +0000 UTC m=+3512.338855809" lastFinishedPulling="2026-02-02 18:13:33.625841409 +0000 UTC m=+3514.778256674" observedRunningTime="2026-02-02 18:13:34.244383216 +0000 UTC m=+3515.396798481" watchObservedRunningTime="2026-02-02 18:13:34.248272327 +0000 UTC m=+3515.400687592" Feb 02 18:13:40 crc kubenswrapper[4858]: I0202 18:13:40.232011 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-zj9ns" Feb 02 18:13:40 crc kubenswrapper[4858]: I0202 18:13:40.232428 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zj9ns" Feb 02 18:13:40 crc kubenswrapper[4858]: I0202 18:13:40.278602 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zj9ns" Feb 02 18:13:40 crc kubenswrapper[4858]: I0202 18:13:40.362385 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zj9ns" Feb 02 18:13:40 crc kubenswrapper[4858]: I0202 18:13:40.516688 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zj9ns"] Feb 02 18:13:42 crc kubenswrapper[4858]: I0202 18:13:42.346778 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zj9ns" podUID="48a66575-f421-49c6-b952-e9145a528831" containerName="registry-server" containerID="cri-o://3651a9ceae6ae6ecbbba2275c057732b862dc5e526c860f4c1536aec8550be1e" gracePeriod=2 Feb 02 18:13:42 crc kubenswrapper[4858]: I0202 18:13:42.820188 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zj9ns" Feb 02 18:13:42 crc kubenswrapper[4858]: I0202 18:13:42.995234 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a66575-f421-49c6-b952-e9145a528831-utilities\") pod \"48a66575-f421-49c6-b952-e9145a528831\" (UID: \"48a66575-f421-49c6-b952-e9145a528831\") " Feb 02 18:13:42 crc kubenswrapper[4858]: I0202 18:13:42.995316 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k4l4\" (UniqueName: \"kubernetes.io/projected/48a66575-f421-49c6-b952-e9145a528831-kube-api-access-4k4l4\") pod \"48a66575-f421-49c6-b952-e9145a528831\" (UID: \"48a66575-f421-49c6-b952-e9145a528831\") " Feb 02 18:13:42 crc kubenswrapper[4858]: I0202 18:13:42.995362 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a66575-f421-49c6-b952-e9145a528831-catalog-content\") pod \"48a66575-f421-49c6-b952-e9145a528831\" (UID: \"48a66575-f421-49c6-b952-e9145a528831\") " Feb 02 18:13:42 crc kubenswrapper[4858]: I0202 18:13:42.996775 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48a66575-f421-49c6-b952-e9145a528831-utilities" (OuterVolumeSpecName: "utilities") pod "48a66575-f421-49c6-b952-e9145a528831" (UID: "48a66575-f421-49c6-b952-e9145a528831"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:13:43 crc kubenswrapper[4858]: I0202 18:13:43.001826 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a66575-f421-49c6-b952-e9145a528831-kube-api-access-4k4l4" (OuterVolumeSpecName: "kube-api-access-4k4l4") pod "48a66575-f421-49c6-b952-e9145a528831" (UID: "48a66575-f421-49c6-b952-e9145a528831"). InnerVolumeSpecName "kube-api-access-4k4l4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:13:43 crc kubenswrapper[4858]: I0202 18:13:43.049186 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48a66575-f421-49c6-b952-e9145a528831-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48a66575-f421-49c6-b952-e9145a528831" (UID: "48a66575-f421-49c6-b952-e9145a528831"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:13:43 crc kubenswrapper[4858]: I0202 18:13:43.098272 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48a66575-f421-49c6-b952-e9145a528831-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 18:13:43 crc kubenswrapper[4858]: I0202 18:13:43.098363 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k4l4\" (UniqueName: \"kubernetes.io/projected/48a66575-f421-49c6-b952-e9145a528831-kube-api-access-4k4l4\") on node \"crc\" DevicePath \"\"" Feb 02 18:13:43 crc kubenswrapper[4858]: I0202 18:13:43.098380 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48a66575-f421-49c6-b952-e9145a528831-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 18:13:43 crc kubenswrapper[4858]: I0202 18:13:43.357294 4858 generic.go:334] "Generic (PLEG): container finished" podID="48a66575-f421-49c6-b952-e9145a528831" containerID="3651a9ceae6ae6ecbbba2275c057732b862dc5e526c860f4c1536aec8550be1e" exitCode=0 Feb 02 18:13:43 crc kubenswrapper[4858]: I0202 18:13:43.357379 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zj9ns" event={"ID":"48a66575-f421-49c6-b952-e9145a528831","Type":"ContainerDied","Data":"3651a9ceae6ae6ecbbba2275c057732b862dc5e526c860f4c1536aec8550be1e"} Feb 02 18:13:43 crc kubenswrapper[4858]: I0202 18:13:43.358572 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zj9ns" event={"ID":"48a66575-f421-49c6-b952-e9145a528831","Type":"ContainerDied","Data":"63e764cef1b99de9e693aea419ffb3ab5f349ac87f5d898200e5e0b5886e97b5"} Feb 02 18:13:43 crc kubenswrapper[4858]: I0202 18:13:43.357413 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zj9ns" Feb 02 18:13:43 crc kubenswrapper[4858]: I0202 18:13:43.358637 4858 scope.go:117] "RemoveContainer" containerID="3651a9ceae6ae6ecbbba2275c057732b862dc5e526c860f4c1536aec8550be1e" Feb 02 18:13:43 crc kubenswrapper[4858]: I0202 18:13:43.394515 4858 scope.go:117] "RemoveContainer" containerID="1c9b91f9d3688d8b9568c098f53439a65273071be7b1fe083ce6169ec2a19d39" Feb 02 18:13:43 crc kubenswrapper[4858]: I0202 18:13:43.399804 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zj9ns"] Feb 02 18:13:43 crc kubenswrapper[4858]: I0202 18:13:43.413813 4858 scope.go:117] "RemoveContainer" containerID="ee206edffae92c86169827dae66880636228e4db08f71d6e35ab931238da90eb" Feb 02 18:13:43 crc kubenswrapper[4858]: I0202 18:13:43.414858 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zj9ns"] Feb 02 18:13:43 crc kubenswrapper[4858]: I0202 18:13:43.462387 4858 scope.go:117] "RemoveContainer" containerID="3651a9ceae6ae6ecbbba2275c057732b862dc5e526c860f4c1536aec8550be1e" Feb 02 18:13:43 crc kubenswrapper[4858]: E0202 18:13:43.462933 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3651a9ceae6ae6ecbbba2275c057732b862dc5e526c860f4c1536aec8550be1e\": container with ID starting with 3651a9ceae6ae6ecbbba2275c057732b862dc5e526c860f4c1536aec8550be1e not found: ID does not exist" containerID="3651a9ceae6ae6ecbbba2275c057732b862dc5e526c860f4c1536aec8550be1e" Feb 02 18:13:43 crc kubenswrapper[4858]: I0202 18:13:43.463004 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3651a9ceae6ae6ecbbba2275c057732b862dc5e526c860f4c1536aec8550be1e"} err="failed to get container status \"3651a9ceae6ae6ecbbba2275c057732b862dc5e526c860f4c1536aec8550be1e\": rpc error: code = NotFound desc = could not find container \"3651a9ceae6ae6ecbbba2275c057732b862dc5e526c860f4c1536aec8550be1e\": container with ID starting with 3651a9ceae6ae6ecbbba2275c057732b862dc5e526c860f4c1536aec8550be1e not found: ID does not exist" Feb 02 18:13:43 crc kubenswrapper[4858]: I0202 18:13:43.463042 4858 scope.go:117] "RemoveContainer" containerID="1c9b91f9d3688d8b9568c098f53439a65273071be7b1fe083ce6169ec2a19d39" Feb 02 18:13:43 crc kubenswrapper[4858]: E0202 18:13:43.463487 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c9b91f9d3688d8b9568c098f53439a65273071be7b1fe083ce6169ec2a19d39\": container with ID starting with 1c9b91f9d3688d8b9568c098f53439a65273071be7b1fe083ce6169ec2a19d39 not found: ID does not exist" containerID="1c9b91f9d3688d8b9568c098f53439a65273071be7b1fe083ce6169ec2a19d39" Feb 02 18:13:43 crc kubenswrapper[4858]: I0202 18:13:43.463521 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c9b91f9d3688d8b9568c098f53439a65273071be7b1fe083ce6169ec2a19d39"} err="failed to get container status \"1c9b91f9d3688d8b9568c098f53439a65273071be7b1fe083ce6169ec2a19d39\": rpc error: code = NotFound desc = could not find container \"1c9b91f9d3688d8b9568c098f53439a65273071be7b1fe083ce6169ec2a19d39\": container with ID starting with 1c9b91f9d3688d8b9568c098f53439a65273071be7b1fe083ce6169ec2a19d39 not found: ID does not exist" Feb 02 18:13:43 crc kubenswrapper[4858]: I0202 18:13:43.463545 4858 scope.go:117] "RemoveContainer" 
containerID="ee206edffae92c86169827dae66880636228e4db08f71d6e35ab931238da90eb" Feb 02 18:13:43 crc kubenswrapper[4858]: E0202 18:13:43.463802 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee206edffae92c86169827dae66880636228e4db08f71d6e35ab931238da90eb\": container with ID starting with ee206edffae92c86169827dae66880636228e4db08f71d6e35ab931238da90eb not found: ID does not exist" containerID="ee206edffae92c86169827dae66880636228e4db08f71d6e35ab931238da90eb" Feb 02 18:13:43 crc kubenswrapper[4858]: I0202 18:13:43.463840 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee206edffae92c86169827dae66880636228e4db08f71d6e35ab931238da90eb"} err="failed to get container status \"ee206edffae92c86169827dae66880636228e4db08f71d6e35ab931238da90eb\": rpc error: code = NotFound desc = could not find container \"ee206edffae92c86169827dae66880636228e4db08f71d6e35ab931238da90eb\": container with ID starting with ee206edffae92c86169827dae66880636228e4db08f71d6e35ab931238da90eb not found: ID does not exist" Feb 02 18:13:44 crc kubenswrapper[4858]: I0202 18:13:44.411954 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48a66575-f421-49c6-b952-e9145a528831" path="/var/lib/kubelet/pods/48a66575-f421-49c6-b952-e9145a528831/volumes" Feb 02 18:14:27 crc kubenswrapper[4858]: I0202 18:14:27.807275 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 18:14:27 crc kubenswrapper[4858]: I0202 18:14:27.807891 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 18:14:41 crc kubenswrapper[4858]: I0202 18:14:41.172869 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jbbrv"] Feb 02 18:14:41 crc kubenswrapper[4858]: E0202 18:14:41.174553 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a66575-f421-49c6-b952-e9145a528831" containerName="extract-utilities" Feb 02 18:14:41 crc kubenswrapper[4858]: I0202 18:14:41.174570 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a66575-f421-49c6-b952-e9145a528831" containerName="extract-utilities" Feb 02 18:14:41 crc kubenswrapper[4858]: E0202 18:14:41.174582 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a66575-f421-49c6-b952-e9145a528831" containerName="registry-server" Feb 02 18:14:41 crc kubenswrapper[4858]: I0202 18:14:41.174589 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a66575-f421-49c6-b952-e9145a528831" containerName="registry-server" Feb 02 18:14:41 crc kubenswrapper[4858]: E0202 18:14:41.174608 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a66575-f421-49c6-b952-e9145a528831" containerName="extract-content" Feb 02 18:14:41 crc kubenswrapper[4858]: I0202 18:14:41.174615 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a66575-f421-49c6-b952-e9145a528831" containerName="extract-content" Feb 02 18:14:41 crc kubenswrapper[4858]: I0202 
18:14:41.174816 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a66575-f421-49c6-b952-e9145a528831" containerName="registry-server" Feb 02 18:14:41 crc kubenswrapper[4858]: I0202 18:14:41.178469 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jbbrv" Feb 02 18:14:41 crc kubenswrapper[4858]: I0202 18:14:41.184314 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jbbrv"] Feb 02 18:14:41 crc kubenswrapper[4858]: I0202 18:14:41.263878 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822f4e7a-8717-4be6-9489-d6b7761c4590-catalog-content\") pod \"redhat-operators-jbbrv\" (UID: \"822f4e7a-8717-4be6-9489-d6b7761c4590\") " pod="openshift-marketplace/redhat-operators-jbbrv" Feb 02 18:14:41 crc kubenswrapper[4858]: I0202 18:14:41.264016 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822f4e7a-8717-4be6-9489-d6b7761c4590-utilities\") pod \"redhat-operators-jbbrv\" (UID: \"822f4e7a-8717-4be6-9489-d6b7761c4590\") " pod="openshift-marketplace/redhat-operators-jbbrv" Feb 02 18:14:41 crc kubenswrapper[4858]: I0202 18:14:41.264092 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzqll\" (UniqueName: \"kubernetes.io/projected/822f4e7a-8717-4be6-9489-d6b7761c4590-kube-api-access-zzqll\") pod \"redhat-operators-jbbrv\" (UID: \"822f4e7a-8717-4be6-9489-d6b7761c4590\") " pod="openshift-marketplace/redhat-operators-jbbrv" Feb 02 18:14:41 crc kubenswrapper[4858]: I0202 18:14:41.366229 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822f4e7a-8717-4be6-9489-d6b7761c4590-catalog-content\") pod \"redhat-operators-jbbrv\" (UID: \"822f4e7a-8717-4be6-9489-d6b7761c4590\") " pod="openshift-marketplace/redhat-operators-jbbrv" Feb 02 18:14:41 crc kubenswrapper[4858]: I0202 18:14:41.366640 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822f4e7a-8717-4be6-9489-d6b7761c4590-utilities\") pod \"redhat-operators-jbbrv\" (UID: \"822f4e7a-8717-4be6-9489-d6b7761c4590\") " pod="openshift-marketplace/redhat-operators-jbbrv" Feb 02 18:14:41 crc kubenswrapper[4858]: I0202 18:14:41.366802 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzqll\" (UniqueName: \"kubernetes.io/projected/822f4e7a-8717-4be6-9489-d6b7761c4590-kube-api-access-zzqll\") pod \"redhat-operators-jbbrv\" (UID: \"822f4e7a-8717-4be6-9489-d6b7761c4590\") " pod="openshift-marketplace/redhat-operators-jbbrv" Feb 02 18:14:41 crc kubenswrapper[4858]: I0202 18:14:41.366989 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822f4e7a-8717-4be6-9489-d6b7761c4590-utilities\") pod \"redhat-operators-jbbrv\" (UID: \"822f4e7a-8717-4be6-9489-d6b7761c4590\") " pod="openshift-marketplace/redhat-operators-jbbrv" Feb 02 18:14:41 crc kubenswrapper[4858]: I0202 18:14:41.366827 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822f4e7a-8717-4be6-9489-d6b7761c4590-catalog-content\") pod 
\"redhat-operators-jbbrv\" (UID: \"822f4e7a-8717-4be6-9489-d6b7761c4590\") " pod="openshift-marketplace/redhat-operators-jbbrv" Feb 02 18:14:41 crc kubenswrapper[4858]: I0202 18:14:41.403202 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzqll\" (UniqueName: \"kubernetes.io/projected/822f4e7a-8717-4be6-9489-d6b7761c4590-kube-api-access-zzqll\") pod \"redhat-operators-jbbrv\" (UID: \"822f4e7a-8717-4be6-9489-d6b7761c4590\") " pod="openshift-marketplace/redhat-operators-jbbrv" Feb 02 18:14:41 crc kubenswrapper[4858]: I0202 18:14:41.512904 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jbbrv" Feb 02 18:14:42 crc kubenswrapper[4858]: I0202 18:14:42.191038 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jbbrv"] Feb 02 18:14:42 crc kubenswrapper[4858]: I0202 18:14:42.938130 4858 generic.go:334] "Generic (PLEG): container finished" podID="822f4e7a-8717-4be6-9489-d6b7761c4590" containerID="82c579826d5b6fc66fb962a48b7d17e95e22c0626555a23fe2825b2dc4fe2535" exitCode=0 Feb 02 18:14:42 crc kubenswrapper[4858]: I0202 18:14:42.938209 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jbbrv" event={"ID":"822f4e7a-8717-4be6-9489-d6b7761c4590","Type":"ContainerDied","Data":"82c579826d5b6fc66fb962a48b7d17e95e22c0626555a23fe2825b2dc4fe2535"} Feb 02 18:14:42 crc kubenswrapper[4858]: I0202 18:14:42.938649 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jbbrv" event={"ID":"822f4e7a-8717-4be6-9489-d6b7761c4590","Type":"ContainerStarted","Data":"dab12c700bec44d4440f2050f0a88848af18fff9904294c1a24f78553c51672d"} Feb 02 18:14:42 crc kubenswrapper[4858]: I0202 18:14:42.940433 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 18:14:42 crc kubenswrapper[4858]: I0202 18:14:42.943549 4858 generic.go:334] "Generic (PLEG): container finished" podID="ae31fff3-88d5-48ac-8d5a-b732e04c158b" containerID="10e0d3ed8756b2b1d309fec98a2ee73afda935df5fde314675607f9087635e39" exitCode=0 Feb 02 18:14:42 crc kubenswrapper[4858]: I0202 18:14:42.943608 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7gxhs/must-gather-6qlvm" event={"ID":"ae31fff3-88d5-48ac-8d5a-b732e04c158b","Type":"ContainerDied","Data":"10e0d3ed8756b2b1d309fec98a2ee73afda935df5fde314675607f9087635e39"} Feb 02 18:14:42 crc kubenswrapper[4858]: I0202 18:14:42.944812 4858 scope.go:117] "RemoveContainer" containerID="10e0d3ed8756b2b1d309fec98a2ee73afda935df5fde314675607f9087635e39" Feb 02 18:14:43 crc kubenswrapper[4858]: I0202 18:14:43.859044 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7gxhs_must-gather-6qlvm_ae31fff3-88d5-48ac-8d5a-b732e04c158b/gather/0.log" Feb 02 18:14:43 crc kubenswrapper[4858]: I0202 18:14:43.955796 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jbbrv" event={"ID":"822f4e7a-8717-4be6-9489-d6b7761c4590","Type":"ContainerStarted","Data":"e29b7958d3a953ac31a960e06c92656b5cd25e5ba2d32dac1d2e193505a1f84f"} Feb 02 18:14:44 crc kubenswrapper[4858]: I0202 18:14:44.969105 4858 generic.go:334] "Generic (PLEG): container finished" podID="822f4e7a-8717-4be6-9489-d6b7761c4590" containerID="e29b7958d3a953ac31a960e06c92656b5cd25e5ba2d32dac1d2e193505a1f84f" exitCode=0 Feb 02 18:14:44 crc kubenswrapper[4858]: I0202 
18:14:44.969801 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jbbrv" event={"ID":"822f4e7a-8717-4be6-9489-d6b7761c4590","Type":"ContainerDied","Data":"e29b7958d3a953ac31a960e06c92656b5cd25e5ba2d32dac1d2e193505a1f84f"} Feb 02 18:14:45 crc kubenswrapper[4858]: I0202 18:14:45.984320 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jbbrv" event={"ID":"822f4e7a-8717-4be6-9489-d6b7761c4590","Type":"ContainerStarted","Data":"e0819858cffbfcafb560dd87cd3f4da238e935109594467f3ab30d2ed708b770"} Feb 02 18:14:46 crc kubenswrapper[4858]: I0202 18:14:46.010354 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jbbrv" podStartSLOduration=2.445804852 podStartE2EDuration="5.010332557s" podCreationTimestamp="2026-02-02 18:14:41 +0000 UTC" firstStartedPulling="2026-02-02 18:14:42.940171945 +0000 UTC m=+3584.092587210" lastFinishedPulling="2026-02-02 18:14:45.50469965 +0000 UTC m=+3586.657114915" observedRunningTime="2026-02-02 18:14:46.004809309 +0000 UTC m=+3587.157224594" watchObservedRunningTime="2026-02-02 18:14:46.010332557 +0000 UTC m=+3587.162747822" Feb 02 18:14:51 crc kubenswrapper[4858]: I0202 18:14:51.513361 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jbbrv" Feb 02 18:14:51 crc kubenswrapper[4858]: I0202 18:14:51.513732 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jbbrv" Feb 02 18:14:51 crc kubenswrapper[4858]: I0202 18:14:51.565253 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jbbrv" Feb 02 18:14:52 crc kubenswrapper[4858]: I0202 18:14:52.091001 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jbbrv" Feb 02 18:14:52 crc kubenswrapper[4858]: I0202 18:14:52.140793 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jbbrv"] Feb 02 18:14:52 crc kubenswrapper[4858]: I0202 18:14:52.385543 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7gxhs/must-gather-6qlvm"] Feb 02 18:14:52 crc kubenswrapper[4858]: I0202 18:14:52.386193 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-7gxhs/must-gather-6qlvm" podUID="ae31fff3-88d5-48ac-8d5a-b732e04c158b" containerName="copy" containerID="cri-o://2526d249755c39b64700708da38287359b01ffe7e219e337eba79662f0a0d70d" gracePeriod=2 Feb 02 18:14:52 crc kubenswrapper[4858]: I0202 18:14:52.396413 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7gxhs/must-gather-6qlvm"] Feb 02 18:14:52 crc kubenswrapper[4858]: I0202 18:14:52.872159 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7gxhs_must-gather-6qlvm_ae31fff3-88d5-48ac-8d5a-b732e04c158b/copy/0.log" Feb 02 18:14:52 crc kubenswrapper[4858]: I0202 18:14:52.873682 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7gxhs/must-gather-6qlvm" Feb 02 18:14:53 crc kubenswrapper[4858]: I0202 18:14:53.002610 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ae31fff3-88d5-48ac-8d5a-b732e04c158b-must-gather-output\") pod \"ae31fff3-88d5-48ac-8d5a-b732e04c158b\" (UID: \"ae31fff3-88d5-48ac-8d5a-b732e04c158b\") " Feb 02 18:14:53 crc kubenswrapper[4858]: I0202 18:14:53.002938 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnlkd\" (UniqueName: \"kubernetes.io/projected/ae31fff3-88d5-48ac-8d5a-b732e04c158b-kube-api-access-pnlkd\") pod \"ae31fff3-88d5-48ac-8d5a-b732e04c158b\" (UID: \"ae31fff3-88d5-48ac-8d5a-b732e04c158b\") " Feb 02 18:14:53 crc kubenswrapper[4858]: I0202 18:14:53.009179 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae31fff3-88d5-48ac-8d5a-b732e04c158b-kube-api-access-pnlkd" (OuterVolumeSpecName: "kube-api-access-pnlkd") pod "ae31fff3-88d5-48ac-8d5a-b732e04c158b" (UID: "ae31fff3-88d5-48ac-8d5a-b732e04c158b"). InnerVolumeSpecName "kube-api-access-pnlkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:14:53 crc kubenswrapper[4858]: I0202 18:14:53.063052 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7gxhs_must-gather-6qlvm_ae31fff3-88d5-48ac-8d5a-b732e04c158b/copy/0.log" Feb 02 18:14:53 crc kubenswrapper[4858]: I0202 18:14:53.064099 4858 generic.go:334] "Generic (PLEG): container finished" podID="ae31fff3-88d5-48ac-8d5a-b732e04c158b" containerID="2526d249755c39b64700708da38287359b01ffe7e219e337eba79662f0a0d70d" exitCode=143 Feb 02 18:14:53 crc kubenswrapper[4858]: I0202 18:14:53.064276 4858 scope.go:117] "RemoveContainer" containerID="2526d249755c39b64700708da38287359b01ffe7e219e337eba79662f0a0d70d" Feb 02 18:14:53 crc kubenswrapper[4858]: I0202 18:14:53.064541 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7gxhs/must-gather-6qlvm" Feb 02 18:14:53 crc kubenswrapper[4858]: I0202 18:14:53.092544 4858 scope.go:117] "RemoveContainer" containerID="10e0d3ed8756b2b1d309fec98a2ee73afda935df5fde314675607f9087635e39" Feb 02 18:14:53 crc kubenswrapper[4858]: I0202 18:14:53.104778 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnlkd\" (UniqueName: \"kubernetes.io/projected/ae31fff3-88d5-48ac-8d5a-b732e04c158b-kube-api-access-pnlkd\") on node \"crc\" DevicePath \"\"" Feb 02 18:14:53 crc kubenswrapper[4858]: I0202 18:14:53.164970 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae31fff3-88d5-48ac-8d5a-b732e04c158b-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ae31fff3-88d5-48ac-8d5a-b732e04c158b" (UID: "ae31fff3-88d5-48ac-8d5a-b732e04c158b"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:14:53 crc kubenswrapper[4858]: I0202 18:14:53.192742 4858 scope.go:117] "RemoveContainer" containerID="2526d249755c39b64700708da38287359b01ffe7e219e337eba79662f0a0d70d" Feb 02 18:14:53 crc kubenswrapper[4858]: E0202 18:14:53.193225 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2526d249755c39b64700708da38287359b01ffe7e219e337eba79662f0a0d70d\": container with ID starting with 2526d249755c39b64700708da38287359b01ffe7e219e337eba79662f0a0d70d not found: ID does not exist" containerID="2526d249755c39b64700708da38287359b01ffe7e219e337eba79662f0a0d70d" Feb 02 18:14:53 crc kubenswrapper[4858]: I0202 18:14:53.193272 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2526d249755c39b64700708da38287359b01ffe7e219e337eba79662f0a0d70d"} err="failed to get container status \"2526d249755c39b64700708da38287359b01ffe7e219e337eba79662f0a0d70d\": rpc error: code = NotFound desc = could not find container \"2526d249755c39b64700708da38287359b01ffe7e219e337eba79662f0a0d70d\": container with ID starting with 2526d249755c39b64700708da38287359b01ffe7e219e337eba79662f0a0d70d not found: ID does not exist" Feb 02 18:14:53 crc kubenswrapper[4858]: I0202 18:14:53.193313 4858 scope.go:117] "RemoveContainer" containerID="10e0d3ed8756b2b1d309fec98a2ee73afda935df5fde314675607f9087635e39" Feb 02 18:14:53 crc kubenswrapper[4858]: E0202 18:14:53.193721 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10e0d3ed8756b2b1d309fec98a2ee73afda935df5fde314675607f9087635e39\": container with ID starting with 10e0d3ed8756b2b1d309fec98a2ee73afda935df5fde314675607f9087635e39 not found: ID does not exist" containerID="10e0d3ed8756b2b1d309fec98a2ee73afda935df5fde314675607f9087635e39" Feb 02 18:14:53 crc kubenswrapper[4858]: I0202 18:14:53.193757 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10e0d3ed8756b2b1d309fec98a2ee73afda935df5fde314675607f9087635e39"} err="failed to get container status \"10e0d3ed8756b2b1d309fec98a2ee73afda935df5fde314675607f9087635e39\": rpc error: code = NotFound desc = could not find container \"10e0d3ed8756b2b1d309fec98a2ee73afda935df5fde314675607f9087635e39\": container with ID starting with 10e0d3ed8756b2b1d309fec98a2ee73afda935df5fde314675607f9087635e39 not found: ID does not exist" Feb 02 18:14:53 crc kubenswrapper[4858]: I0202 18:14:53.207243 4858 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ae31fff3-88d5-48ac-8d5a-b732e04c158b-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 02 18:14:54 crc kubenswrapper[4858]: I0202 18:14:54.074215 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jbbrv" podUID="822f4e7a-8717-4be6-9489-d6b7761c4590" containerName="registry-server" containerID="cri-o://e0819858cffbfcafb560dd87cd3f4da238e935109594467f3ab30d2ed708b770" gracePeriod=2 Feb 02 18:14:54 crc kubenswrapper[4858]: I0202 18:14:54.422526 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae31fff3-88d5-48ac-8d5a-b732e04c158b" path="/var/lib/kubelet/pods/ae31fff3-88d5-48ac-8d5a-b732e04c158b/volumes" Feb 02 18:14:54 crc kubenswrapper[4858]: I0202 18:14:54.598810 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jbbrv" Feb 02 18:14:54 crc kubenswrapper[4858]: I0202 18:14:54.738376 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822f4e7a-8717-4be6-9489-d6b7761c4590-utilities\") pod \"822f4e7a-8717-4be6-9489-d6b7761c4590\" (UID: \"822f4e7a-8717-4be6-9489-d6b7761c4590\") " Feb 02 18:14:54 crc kubenswrapper[4858]: I0202 18:14:54.738509 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzqll\" (UniqueName: \"kubernetes.io/projected/822f4e7a-8717-4be6-9489-d6b7761c4590-kube-api-access-zzqll\") pod \"822f4e7a-8717-4be6-9489-d6b7761c4590\" (UID: \"822f4e7a-8717-4be6-9489-d6b7761c4590\") " Feb 02 18:14:54 crc kubenswrapper[4858]: I0202 18:14:54.738555 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822f4e7a-8717-4be6-9489-d6b7761c4590-catalog-content\") pod \"822f4e7a-8717-4be6-9489-d6b7761c4590\" (UID: \"822f4e7a-8717-4be6-9489-d6b7761c4590\") " Feb 02 18:14:54 crc kubenswrapper[4858]: I0202 18:14:54.740073 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/822f4e7a-8717-4be6-9489-d6b7761c4590-utilities" (OuterVolumeSpecName: "utilities") pod "822f4e7a-8717-4be6-9489-d6b7761c4590" (UID: "822f4e7a-8717-4be6-9489-d6b7761c4590"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:14:54 crc kubenswrapper[4858]: I0202 18:14:54.747644 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/822f4e7a-8717-4be6-9489-d6b7761c4590-kube-api-access-zzqll" (OuterVolumeSpecName: "kube-api-access-zzqll") pod "822f4e7a-8717-4be6-9489-d6b7761c4590" (UID: "822f4e7a-8717-4be6-9489-d6b7761c4590"). InnerVolumeSpecName "kube-api-access-zzqll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:14:54 crc kubenswrapper[4858]: I0202 18:14:54.841393 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822f4e7a-8717-4be6-9489-d6b7761c4590-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 18:14:54 crc kubenswrapper[4858]: I0202 18:14:54.841457 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzqll\" (UniqueName: \"kubernetes.io/projected/822f4e7a-8717-4be6-9489-d6b7761c4590-kube-api-access-zzqll\") on node \"crc\" DevicePath \"\"" Feb 02 18:14:55 crc kubenswrapper[4858]: I0202 18:14:55.085598 4858 generic.go:334] "Generic (PLEG): container finished" podID="822f4e7a-8717-4be6-9489-d6b7761c4590" containerID="e0819858cffbfcafb560dd87cd3f4da238e935109594467f3ab30d2ed708b770" exitCode=0 Feb 02 18:14:55 crc kubenswrapper[4858]: I0202 18:14:55.085642 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jbbrv" event={"ID":"822f4e7a-8717-4be6-9489-d6b7761c4590","Type":"ContainerDied","Data":"e0819858cffbfcafb560dd87cd3f4da238e935109594467f3ab30d2ed708b770"} Feb 02 18:14:55 crc kubenswrapper[4858]: I0202 18:14:55.085675 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jbbrv" event={"ID":"822f4e7a-8717-4be6-9489-d6b7761c4590","Type":"ContainerDied","Data":"dab12c700bec44d4440f2050f0a88848af18fff9904294c1a24f78553c51672d"} Feb 02 18:14:55 crc kubenswrapper[4858]: I0202 18:14:55.085697 4858 scope.go:117] "RemoveContainer" containerID="e0819858cffbfcafb560dd87cd3f4da238e935109594467f3ab30d2ed708b770" Feb 02 18:14:55 crc kubenswrapper[4858]: I0202 18:14:55.085853 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jbbrv" Feb 02 18:14:55 crc kubenswrapper[4858]: I0202 18:14:55.113029 4858 scope.go:117] "RemoveContainer" containerID="e29b7958d3a953ac31a960e06c92656b5cd25e5ba2d32dac1d2e193505a1f84f" Feb 02 18:14:55 crc kubenswrapper[4858]: I0202 18:14:55.145179 4858 scope.go:117] "RemoveContainer" containerID="82c579826d5b6fc66fb962a48b7d17e95e22c0626555a23fe2825b2dc4fe2535" Feb 02 18:14:55 crc kubenswrapper[4858]: I0202 18:14:55.193236 4858 scope.go:117] "RemoveContainer" containerID="e0819858cffbfcafb560dd87cd3f4da238e935109594467f3ab30d2ed708b770" Feb 02 18:14:55 crc kubenswrapper[4858]: E0202 18:14:55.193707 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0819858cffbfcafb560dd87cd3f4da238e935109594467f3ab30d2ed708b770\": container with ID starting with e0819858cffbfcafb560dd87cd3f4da238e935109594467f3ab30d2ed708b770 not found: ID does not exist" containerID="e0819858cffbfcafb560dd87cd3f4da238e935109594467f3ab30d2ed708b770" Feb 02 18:14:55 crc kubenswrapper[4858]: I0202 18:14:55.193740 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0819858cffbfcafb560dd87cd3f4da238e935109594467f3ab30d2ed708b770"} err="failed to get container status \"e0819858cffbfcafb560dd87cd3f4da238e935109594467f3ab30d2ed708b770\": rpc error: code = NotFound desc = could not find container \"e0819858cffbfcafb560dd87cd3f4da238e935109594467f3ab30d2ed708b770\": container with ID starting with e0819858cffbfcafb560dd87cd3f4da238e935109594467f3ab30d2ed708b770 not found: ID does not exist" Feb 02 18:14:55 crc kubenswrapper[4858]: I0202 18:14:55.193764 4858 scope.go:117] "RemoveContainer" containerID="e29b7958d3a953ac31a960e06c92656b5cd25e5ba2d32dac1d2e193505a1f84f" Feb 02 18:14:55 crc kubenswrapper[4858]: E0202 18:14:55.194088 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e29b7958d3a953ac31a960e06c92656b5cd25e5ba2d32dac1d2e193505a1f84f\": container with ID starting with e29b7958d3a953ac31a960e06c92656b5cd25e5ba2d32dac1d2e193505a1f84f not found: ID does not exist" containerID="e29b7958d3a953ac31a960e06c92656b5cd25e5ba2d32dac1d2e193505a1f84f" Feb 02 18:14:55 crc kubenswrapper[4858]: I0202 18:14:55.194116 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e29b7958d3a953ac31a960e06c92656b5cd25e5ba2d32dac1d2e193505a1f84f"} err="failed to get container status \"e29b7958d3a953ac31a960e06c92656b5cd25e5ba2d32dac1d2e193505a1f84f\": rpc error: code = NotFound desc = could not find container \"e29b7958d3a953ac31a960e06c92656b5cd25e5ba2d32dac1d2e193505a1f84f\": container with ID starting with e29b7958d3a953ac31a960e06c92656b5cd25e5ba2d32dac1d2e193505a1f84f not found: ID does not exist" Feb 02 18:14:55 crc kubenswrapper[4858]: I0202 18:14:55.194133 4858 scope.go:117] "RemoveContainer" containerID="82c579826d5b6fc66fb962a48b7d17e95e22c0626555a23fe2825b2dc4fe2535" Feb 02 18:14:55 crc kubenswrapper[4858]: E0202 18:14:55.194464 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82c579826d5b6fc66fb962a48b7d17e95e22c0626555a23fe2825b2dc4fe2535\": container with ID starting with 82c579826d5b6fc66fb962a48b7d17e95e22c0626555a23fe2825b2dc4fe2535 not found: ID does not exist" containerID="82c579826d5b6fc66fb962a48b7d17e95e22c0626555a23fe2825b2dc4fe2535" 
Feb 02 18:14:55 crc kubenswrapper[4858]: I0202 18:14:55.194509 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82c579826d5b6fc66fb962a48b7d17e95e22c0626555a23fe2825b2dc4fe2535"} err="failed to get container status \"82c579826d5b6fc66fb962a48b7d17e95e22c0626555a23fe2825b2dc4fe2535\": rpc error: code = NotFound desc = could not find container \"82c579826d5b6fc66fb962a48b7d17e95e22c0626555a23fe2825b2dc4fe2535\": container with ID starting with 82c579826d5b6fc66fb962a48b7d17e95e22c0626555a23fe2825b2dc4fe2535 not found: ID does not exist" Feb 02 18:14:56 crc kubenswrapper[4858]: I0202 18:14:56.407521 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/822f4e7a-8717-4be6-9489-d6b7761c4590-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "822f4e7a-8717-4be6-9489-d6b7761c4590" (UID: "822f4e7a-8717-4be6-9489-d6b7761c4590"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:14:56 crc kubenswrapper[4858]: I0202 18:14:56.472619 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822f4e7a-8717-4be6-9489-d6b7761c4590-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 18:14:56 crc kubenswrapper[4858]: I0202 18:14:56.609204 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jbbrv"] Feb 02 18:14:56 crc kubenswrapper[4858]: I0202 18:14:56.618514 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jbbrv"] Feb 02 18:14:57 crc kubenswrapper[4858]: I0202 18:14:57.807437 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 18:14:57 crc kubenswrapper[4858]: I0202 18:14:57.807774 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 18:14:58 crc kubenswrapper[4858]: I0202 18:14:58.412912 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="822f4e7a-8717-4be6-9489-d6b7761c4590" path="/var/lib/kubelet/pods/822f4e7a-8717-4be6-9489-d6b7761c4590/volumes" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.168873 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500935-4w96k"] Feb 02 18:15:00 crc kubenswrapper[4858]: E0202 18:15:00.178889 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae31fff3-88d5-48ac-8d5a-b732e04c158b" containerName="copy" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.179124 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae31fff3-88d5-48ac-8d5a-b732e04c158b" containerName="copy" Feb 02 18:15:00 crc kubenswrapper[4858]: E0202 18:15:00.179208 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae31fff3-88d5-48ac-8d5a-b732e04c158b" containerName="gather" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.179269 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ae31fff3-88d5-48ac-8d5a-b732e04c158b" containerName="gather" Feb 02 18:15:00 crc kubenswrapper[4858]: E0202 18:15:00.179353 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="822f4e7a-8717-4be6-9489-d6b7761c4590" containerName="extract-content" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.179419 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="822f4e7a-8717-4be6-9489-d6b7761c4590" containerName="extract-content" Feb 02 18:15:00 crc kubenswrapper[4858]: E0202 18:15:00.179482 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="822f4e7a-8717-4be6-9489-d6b7761c4590" containerName="registry-server" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.179539 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="822f4e7a-8717-4be6-9489-d6b7761c4590" containerName="registry-server" Feb 02 18:15:00 crc kubenswrapper[4858]: E0202 18:15:00.179651 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="822f4e7a-8717-4be6-9489-d6b7761c4590" containerName="extract-utilities" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.179736 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="822f4e7a-8717-4be6-9489-d6b7761c4590" containerName="extract-utilities" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.180067 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae31fff3-88d5-48ac-8d5a-b732e04c158b" containerName="copy" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.180203 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="822f4e7a-8717-4be6-9489-d6b7761c4590" containerName="registry-server" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.180302 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae31fff3-88d5-48ac-8d5a-b732e04c158b" containerName="gather" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.181085 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500935-4w96k"] Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.181256 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500935-4w96k" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.199547 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.199883 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.242475 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/30403c4a-b237-4a53-b0bf-f61b96aff0c4-secret-volume\") pod \"collect-profiles-29500935-4w96k\" (UID: \"30403c4a-b237-4a53-b0bf-f61b96aff0c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500935-4w96k" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.242831 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfhx9\" (UniqueName: \"kubernetes.io/projected/30403c4a-b237-4a53-b0bf-f61b96aff0c4-kube-api-access-vfhx9\") pod \"collect-profiles-29500935-4w96k\" (UID: \"30403c4a-b237-4a53-b0bf-f61b96aff0c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500935-4w96k" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.243063 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30403c4a-b237-4a53-b0bf-f61b96aff0c4-config-volume\") pod \"collect-profiles-29500935-4w96k\" (UID: \"30403c4a-b237-4a53-b0bf-f61b96aff0c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500935-4w96k" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.345424 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30403c4a-b237-4a53-b0bf-f61b96aff0c4-config-volume\") pod \"collect-profiles-29500935-4w96k\" (UID: \"30403c4a-b237-4a53-b0bf-f61b96aff0c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500935-4w96k" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.345587 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/30403c4a-b237-4a53-b0bf-f61b96aff0c4-secret-volume\") pod \"collect-profiles-29500935-4w96k\" (UID: \"30403c4a-b237-4a53-b0bf-f61b96aff0c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500935-4w96k" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.345679 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfhx9\" (UniqueName: \"kubernetes.io/projected/30403c4a-b237-4a53-b0bf-f61b96aff0c4-kube-api-access-vfhx9\") pod \"collect-profiles-29500935-4w96k\" (UID: \"30403c4a-b237-4a53-b0bf-f61b96aff0c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500935-4w96k" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.347513 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.353887 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/30403c4a-b237-4a53-b0bf-f61b96aff0c4-secret-volume\") pod 
\"collect-profiles-29500935-4w96k\" (UID: \"30403c4a-b237-4a53-b0bf-f61b96aff0c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500935-4w96k" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.356746 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30403c4a-b237-4a53-b0bf-f61b96aff0c4-config-volume\") pod \"collect-profiles-29500935-4w96k\" (UID: \"30403c4a-b237-4a53-b0bf-f61b96aff0c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500935-4w96k" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.365582 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfhx9\" (UniqueName: \"kubernetes.io/projected/30403c4a-b237-4a53-b0bf-f61b96aff0c4-kube-api-access-vfhx9\") pod \"collect-profiles-29500935-4w96k\" (UID: \"30403c4a-b237-4a53-b0bf-f61b96aff0c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500935-4w96k" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.533916 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.542410 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500935-4w96k" Feb 02 18:15:00 crc kubenswrapper[4858]: I0202 18:15:00.998079 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500935-4w96k"] Feb 02 18:15:01 crc kubenswrapper[4858]: I0202 18:15:01.155502 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500935-4w96k" event={"ID":"30403c4a-b237-4a53-b0bf-f61b96aff0c4","Type":"ContainerStarted","Data":"b85d1f35d0fb9521191c8b9d2235489ebb3c32b60e6f899966733b1ca10b67d7"} Feb 02 18:15:02 crc kubenswrapper[4858]: I0202 18:15:02.165292 4858 generic.go:334] "Generic (PLEG): container finished" podID="30403c4a-b237-4a53-b0bf-f61b96aff0c4" containerID="1162ee11e8f976c59e53e839179173e6145ebe8b8c3e4f7c36bb68c36cbc745f" exitCode=0 Feb 02 18:15:02 crc kubenswrapper[4858]: I0202 18:15:02.165348 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500935-4w96k" event={"ID":"30403c4a-b237-4a53-b0bf-f61b96aff0c4","Type":"ContainerDied","Data":"1162ee11e8f976c59e53e839179173e6145ebe8b8c3e4f7c36bb68c36cbc745f"} Feb 02 18:15:03 crc kubenswrapper[4858]: I0202 18:15:03.518335 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500935-4w96k" Feb 02 18:15:03 crc kubenswrapper[4858]: I0202 18:15:03.610867 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/30403c4a-b237-4a53-b0bf-f61b96aff0c4-secret-volume\") pod \"30403c4a-b237-4a53-b0bf-f61b96aff0c4\" (UID: \"30403c4a-b237-4a53-b0bf-f61b96aff0c4\") " Feb 02 18:15:03 crc kubenswrapper[4858]: I0202 18:15:03.611068 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfhx9\" (UniqueName: \"kubernetes.io/projected/30403c4a-b237-4a53-b0bf-f61b96aff0c4-kube-api-access-vfhx9\") pod \"30403c4a-b237-4a53-b0bf-f61b96aff0c4\" (UID: \"30403c4a-b237-4a53-b0bf-f61b96aff0c4\") " Feb 02 18:15:03 crc kubenswrapper[4858]: I0202 18:15:03.611203 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30403c4a-b237-4a53-b0bf-f61b96aff0c4-config-volume\") pod \"30403c4a-b237-4a53-b0bf-f61b96aff0c4\" (UID: \"30403c4a-b237-4a53-b0bf-f61b96aff0c4\") " Feb 02 18:15:03 crc kubenswrapper[4858]: I0202 18:15:03.611882 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30403c4a-b237-4a53-b0bf-f61b96aff0c4-config-volume" (OuterVolumeSpecName: "config-volume") pod "30403c4a-b237-4a53-b0bf-f61b96aff0c4" (UID: "30403c4a-b237-4a53-b0bf-f61b96aff0c4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 18:15:03 crc kubenswrapper[4858]: I0202 18:15:03.617091 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30403c4a-b237-4a53-b0bf-f61b96aff0c4-kube-api-access-vfhx9" (OuterVolumeSpecName: "kube-api-access-vfhx9") pod "30403c4a-b237-4a53-b0bf-f61b96aff0c4" (UID: "30403c4a-b237-4a53-b0bf-f61b96aff0c4"). InnerVolumeSpecName "kube-api-access-vfhx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:15:03 crc kubenswrapper[4858]: I0202 18:15:03.617821 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30403c4a-b237-4a53-b0bf-f61b96aff0c4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "30403c4a-b237-4a53-b0bf-f61b96aff0c4" (UID: "30403c4a-b237-4a53-b0bf-f61b96aff0c4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 18:15:03 crc kubenswrapper[4858]: I0202 18:15:03.713768 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30403c4a-b237-4a53-b0bf-f61b96aff0c4-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 18:15:03 crc kubenswrapper[4858]: I0202 18:15:03.713810 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/30403c4a-b237-4a53-b0bf-f61b96aff0c4-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 18:15:03 crc kubenswrapper[4858]: I0202 18:15:03.713820 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfhx9\" (UniqueName: \"kubernetes.io/projected/30403c4a-b237-4a53-b0bf-f61b96aff0c4-kube-api-access-vfhx9\") on node \"crc\" DevicePath \"\"" Feb 02 18:15:04 crc kubenswrapper[4858]: I0202 18:15:04.188513 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500935-4w96k" event={"ID":"30403c4a-b237-4a53-b0bf-f61b96aff0c4","Type":"ContainerDied","Data":"b85d1f35d0fb9521191c8b9d2235489ebb3c32b60e6f899966733b1ca10b67d7"} Feb 02 18:15:04 crc kubenswrapper[4858]: I0202 18:15:04.188552 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500935-4w96k" Feb 02 18:15:04 crc kubenswrapper[4858]: I0202 18:15:04.188554 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b85d1f35d0fb9521191c8b9d2235489ebb3c32b60e6f899966733b1ca10b67d7" Feb 02 18:15:04 crc kubenswrapper[4858]: I0202 18:15:04.581281 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500890-w9tc9"] Feb 02 18:15:04 crc kubenswrapper[4858]: I0202 18:15:04.588549 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500890-w9tc9"] Feb 02 18:15:06 crc kubenswrapper[4858]: I0202 18:15:06.413385 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="529cc58f-54e5-420c-8278-4e015207275f" path="/var/lib/kubelet/pods/529cc58f-54e5-420c-8278-4e015207275f/volumes" Feb 02 18:15:12 crc kubenswrapper[4858]: I0202 18:15:12.030158 4858 scope.go:117] "RemoveContainer" containerID="9bbf99522e64265ffc60b34c05d71d87b7ada7cd9b7fc79f7ce7a59c6065478c" Feb 02 18:15:27 crc kubenswrapper[4858]: I0202 18:15:27.807316 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 18:15:27 crc kubenswrapper[4858]: I0202 18:15:27.808989 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 18:15:27 crc kubenswrapper[4858]: I0202 18:15:27.809138 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" Feb 02 18:15:27 crc kubenswrapper[4858]: I0202 18:15:27.810016 4858 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0407b826940258d2b90ce9df3d656cf4cd038bfd8d47c76fe3dfc58f88e9b7c6"} pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 18:15:27 crc kubenswrapper[4858]: I0202 18:15:27.810186 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" containerID="cri-o://0407b826940258d2b90ce9df3d656cf4cd038bfd8d47c76fe3dfc58f88e9b7c6" gracePeriod=600 Feb 02 18:15:28 crc kubenswrapper[4858]: I0202 18:15:28.404596 4858 generic.go:334] "Generic (PLEG): container finished" podID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerID="0407b826940258d2b90ce9df3d656cf4cd038bfd8d47c76fe3dfc58f88e9b7c6" exitCode=0 Feb 02 18:15:28 crc kubenswrapper[4858]: I0202 18:15:28.414403 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerDied","Data":"0407b826940258d2b90ce9df3d656cf4cd038bfd8d47c76fe3dfc58f88e9b7c6"} Feb 02 18:15:28 crc kubenswrapper[4858]: I0202 18:15:28.414743 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerStarted","Data":"2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c"} Feb 02 18:15:28 crc kubenswrapper[4858]: I0202 18:15:28.414830 4858 scope.go:117] "RemoveContainer" containerID="a5e81828dcdebe8f323750ad14d8ce95e7cafffc8c6a9eb621ef0306363cb195" Feb 02 18:16:12 crc kubenswrapper[4858]: I0202 18:16:12.111128 4858 scope.go:117] "RemoveContainer" containerID="0d7a1f687d71e8fb6a2f29239b0c478f9499d2ca05293e5b0a8c36022f97ae9d" Feb 02 18:17:12 crc kubenswrapper[4858]: I0202 18:17:12.171147 4858 scope.go:117] "RemoveContainer" containerID="fc73e921135560ce00204e1a53b1e2fed3413e85c368975d5386c22e3d740dd2" Feb 02 18:17:48 crc kubenswrapper[4858]: I0202 18:17:48.241553 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4ln74/must-gather-rtvv9"] Feb 02 18:17:48 crc kubenswrapper[4858]: E0202 18:17:48.242544 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30403c4a-b237-4a53-b0bf-f61b96aff0c4" containerName="collect-profiles" Feb 02 18:17:48 crc kubenswrapper[4858]: I0202 18:17:48.242561 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="30403c4a-b237-4a53-b0bf-f61b96aff0c4" containerName="collect-profiles" Feb 02 18:17:48 crc kubenswrapper[4858]: I0202 18:17:48.242778 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="30403c4a-b237-4a53-b0bf-f61b96aff0c4" containerName="collect-profiles" Feb 02 18:17:48 crc kubenswrapper[4858]: I0202 18:17:48.243888 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4ln74/must-gather-rtvv9" Feb 02 18:17:48 crc kubenswrapper[4858]: I0202 18:17:48.245456 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-4ln74"/"default-dockercfg-6xlqr" Feb 02 18:17:48 crc kubenswrapper[4858]: I0202 18:17:48.247292 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4ln74"/"openshift-service-ca.crt" Feb 02 18:17:48 crc kubenswrapper[4858]: I0202 18:17:48.247302 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4ln74"/"kube-root-ca.crt" Feb 02 18:17:48 crc kubenswrapper[4858]: I0202 18:17:48.254376 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4ln74/must-gather-rtvv9"] Feb 02 18:17:48 crc kubenswrapper[4858]: I0202 18:17:48.404216 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f2eb319c-5edd-4a70-a7d6-4c295048bbed-must-gather-output\") pod \"must-gather-rtvv9\" (UID: \"f2eb319c-5edd-4a70-a7d6-4c295048bbed\") " pod="openshift-must-gather-4ln74/must-gather-rtvv9" Feb 02 18:17:48 crc kubenswrapper[4858]: I0202 18:17:48.404392 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l56n\" (UniqueName: \"kubernetes.io/projected/f2eb319c-5edd-4a70-a7d6-4c295048bbed-kube-api-access-9l56n\") pod \"must-gather-rtvv9\" (UID: \"f2eb319c-5edd-4a70-a7d6-4c295048bbed\") " pod="openshift-must-gather-4ln74/must-gather-rtvv9" Feb 02 18:17:48 crc kubenswrapper[4858]: I0202 18:17:48.506414 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f2eb319c-5edd-4a70-a7d6-4c295048bbed-must-gather-output\") pod \"must-gather-rtvv9\" (UID: \"f2eb319c-5edd-4a70-a7d6-4c295048bbed\") " pod="openshift-must-gather-4ln74/must-gather-rtvv9" Feb 02 18:17:48 crc kubenswrapper[4858]: I0202 18:17:48.506591 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l56n\" (UniqueName: \"kubernetes.io/projected/f2eb319c-5edd-4a70-a7d6-4c295048bbed-kube-api-access-9l56n\") pod \"must-gather-rtvv9\" (UID: \"f2eb319c-5edd-4a70-a7d6-4c295048bbed\") " pod="openshift-must-gather-4ln74/must-gather-rtvv9" Feb 02 18:17:48 crc kubenswrapper[4858]: I0202 18:17:48.506854 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f2eb319c-5edd-4a70-a7d6-4c295048bbed-must-gather-output\") pod \"must-gather-rtvv9\" (UID: \"f2eb319c-5edd-4a70-a7d6-4c295048bbed\") " pod="openshift-must-gather-4ln74/must-gather-rtvv9" Feb 02 18:17:48 crc kubenswrapper[4858]: I0202 18:17:48.526600 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l56n\" (UniqueName: \"kubernetes.io/projected/f2eb319c-5edd-4a70-a7d6-4c295048bbed-kube-api-access-9l56n\") pod \"must-gather-rtvv9\" (UID: \"f2eb319c-5edd-4a70-a7d6-4c295048bbed\") " pod="openshift-must-gather-4ln74/must-gather-rtvv9" Feb 02 18:17:48 crc kubenswrapper[4858]: I0202 18:17:48.562688 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4ln74/must-gather-rtvv9" Feb 02 18:17:49 crc kubenswrapper[4858]: I0202 18:17:49.045198 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4ln74/must-gather-rtvv9"] Feb 02 18:17:49 crc kubenswrapper[4858]: I0202 18:17:49.640418 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4ln74/must-gather-rtvv9" event={"ID":"f2eb319c-5edd-4a70-a7d6-4c295048bbed","Type":"ContainerStarted","Data":"154bb6b1fde0089f229ab6e3319ef34257ae9b87b23537bd473ba60ce2c3f71b"} Feb 02 18:17:49 crc kubenswrapper[4858]: I0202 18:17:49.640796 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4ln74/must-gather-rtvv9" event={"ID":"f2eb319c-5edd-4a70-a7d6-4c295048bbed","Type":"ContainerStarted","Data":"32e41c2432e845bea4b6fd4d8eb19aba1f33810192130dfb1bb3e4090df5326b"} Feb 02 18:17:49 crc kubenswrapper[4858]: I0202 18:17:49.640816 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4ln74/must-gather-rtvv9" event={"ID":"f2eb319c-5edd-4a70-a7d6-4c295048bbed","Type":"ContainerStarted","Data":"258b21f9e3d689e4b9b34a4c7e97fd44f4fc7f251c18093aa264ec31efd0746e"} Feb 02 18:17:49 crc kubenswrapper[4858]: I0202 18:17:49.666694 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4ln74/must-gather-rtvv9" podStartSLOduration=1.666672637 podStartE2EDuration="1.666672637s" podCreationTimestamp="2026-02-02 18:17:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 18:17:49.656515276 +0000 UTC m=+3770.808930541" watchObservedRunningTime="2026-02-02 18:17:49.666672637 +0000 UTC m=+3770.819087902" Feb 02 18:17:52 crc kubenswrapper[4858]: I0202 18:17:52.889441 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4ln74/crc-debug-w5blz"] Feb 02 18:17:52 crc kubenswrapper[4858]: I0202 18:17:52.891395 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4ln74/crc-debug-w5blz" Feb 02 18:17:52 crc kubenswrapper[4858]: I0202 18:17:52.995337 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2d737025-7e41-4af9-a1d9-1ebbfddd8ddb-host\") pod \"crc-debug-w5blz\" (UID: \"2d737025-7e41-4af9-a1d9-1ebbfddd8ddb\") " pod="openshift-must-gather-4ln74/crc-debug-w5blz" Feb 02 18:17:52 crc kubenswrapper[4858]: I0202 18:17:52.995421 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8jcx\" (UniqueName: \"kubernetes.io/projected/2d737025-7e41-4af9-a1d9-1ebbfddd8ddb-kube-api-access-q8jcx\") pod \"crc-debug-w5blz\" (UID: \"2d737025-7e41-4af9-a1d9-1ebbfddd8ddb\") " pod="openshift-must-gather-4ln74/crc-debug-w5blz" Feb 02 18:17:53 crc kubenswrapper[4858]: I0202 18:17:53.097174 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2d737025-7e41-4af9-a1d9-1ebbfddd8ddb-host\") pod \"crc-debug-w5blz\" (UID: \"2d737025-7e41-4af9-a1d9-1ebbfddd8ddb\") " pod="openshift-must-gather-4ln74/crc-debug-w5blz" Feb 02 18:17:53 crc kubenswrapper[4858]: I0202 18:17:53.097243 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8jcx\" (UniqueName: \"kubernetes.io/projected/2d737025-7e41-4af9-a1d9-1ebbfddd8ddb-kube-api-access-q8jcx\") pod \"crc-debug-w5blz\" (UID: \"2d737025-7e41-4af9-a1d9-1ebbfddd8ddb\") " pod="openshift-must-gather-4ln74/crc-debug-w5blz" Feb 02 18:17:53 crc kubenswrapper[4858]: I0202 18:17:53.097367 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2d737025-7e41-4af9-a1d9-1ebbfddd8ddb-host\") pod \"crc-debug-w5blz\" (UID: \"2d737025-7e41-4af9-a1d9-1ebbfddd8ddb\") " pod="openshift-must-gather-4ln74/crc-debug-w5blz" Feb 02 18:17:53 crc kubenswrapper[4858]: I0202 18:17:53.118544 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8jcx\" (UniqueName: \"kubernetes.io/projected/2d737025-7e41-4af9-a1d9-1ebbfddd8ddb-kube-api-access-q8jcx\") pod \"crc-debug-w5blz\" (UID: \"2d737025-7e41-4af9-a1d9-1ebbfddd8ddb\") " pod="openshift-must-gather-4ln74/crc-debug-w5blz" Feb 02 18:17:53 crc kubenswrapper[4858]: I0202 18:17:53.215297 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4ln74/crc-debug-w5blz" Feb 02 18:17:53 crc kubenswrapper[4858]: W0202 18:17:53.257057 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d737025_7e41_4af9_a1d9_1ebbfddd8ddb.slice/crio-215155b5703d2f037273a231d4bfb839de3fc2bcf56a43f71fe5b1b22c60ce69 WatchSource:0}: Error finding container 215155b5703d2f037273a231d4bfb839de3fc2bcf56a43f71fe5b1b22c60ce69: Status 404 returned error can't find the container with id 215155b5703d2f037273a231d4bfb839de3fc2bcf56a43f71fe5b1b22c60ce69 Feb 02 18:17:53 crc kubenswrapper[4858]: I0202 18:17:53.676843 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4ln74/crc-debug-w5blz" event={"ID":"2d737025-7e41-4af9-a1d9-1ebbfddd8ddb","Type":"ContainerStarted","Data":"c36bcc2e87ba5ebeb0b93c62f27571adea2297acc3d96b66a0145c0d7d5b37ef"} Feb 02 18:17:53 crc kubenswrapper[4858]: I0202 18:17:53.677159 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4ln74/crc-debug-w5blz" event={"ID":"2d737025-7e41-4af9-a1d9-1ebbfddd8ddb","Type":"ContainerStarted","Data":"215155b5703d2f037273a231d4bfb839de3fc2bcf56a43f71fe5b1b22c60ce69"} Feb 02 18:17:53 crc kubenswrapper[4858]: I0202 18:17:53.694775 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4ln74/crc-debug-w5blz" podStartSLOduration=1.6947580759999998 podStartE2EDuration="1.694758076s" podCreationTimestamp="2026-02-02 18:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 18:17:53.691507303 +0000 UTC m=+3774.843922578" watchObservedRunningTime="2026-02-02 18:17:53.694758076 +0000 UTC m=+3774.847173341" Feb 02 18:17:57 crc kubenswrapper[4858]: I0202 18:17:57.808219 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 18:17:57 crc kubenswrapper[4858]: I0202 18:17:57.808855 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 18:18:27 crc kubenswrapper[4858]: I0202 18:18:27.807297 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 18:18:27 crc kubenswrapper[4858]: I0202 18:18:27.807931 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 18:18:28 crc kubenswrapper[4858]: I0202 18:18:28.001163 4858 generic.go:334] "Generic (PLEG): container finished" podID="2d737025-7e41-4af9-a1d9-1ebbfddd8ddb" 
containerID="c36bcc2e87ba5ebeb0b93c62f27571adea2297acc3d96b66a0145c0d7d5b37ef" exitCode=0 Feb 02 18:18:28 crc kubenswrapper[4858]: I0202 18:18:28.001232 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4ln74/crc-debug-w5blz" event={"ID":"2d737025-7e41-4af9-a1d9-1ebbfddd8ddb","Type":"ContainerDied","Data":"c36bcc2e87ba5ebeb0b93c62f27571adea2297acc3d96b66a0145c0d7d5b37ef"} Feb 02 18:18:29 crc kubenswrapper[4858]: I0202 18:18:29.126768 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4ln74/crc-debug-w5blz" Feb 02 18:18:29 crc kubenswrapper[4858]: I0202 18:18:29.168233 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4ln74/crc-debug-w5blz"] Feb 02 18:18:29 crc kubenswrapper[4858]: I0202 18:18:29.178029 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4ln74/crc-debug-w5blz"] Feb 02 18:18:29 crc kubenswrapper[4858]: I0202 18:18:29.252516 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8jcx\" (UniqueName: \"kubernetes.io/projected/2d737025-7e41-4af9-a1d9-1ebbfddd8ddb-kube-api-access-q8jcx\") pod \"2d737025-7e41-4af9-a1d9-1ebbfddd8ddb\" (UID: \"2d737025-7e41-4af9-a1d9-1ebbfddd8ddb\") " Feb 02 18:18:29 crc kubenswrapper[4858]: I0202 18:18:29.252959 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2d737025-7e41-4af9-a1d9-1ebbfddd8ddb-host\") pod \"2d737025-7e41-4af9-a1d9-1ebbfddd8ddb\" (UID: \"2d737025-7e41-4af9-a1d9-1ebbfddd8ddb\") " Feb 02 18:18:29 crc kubenswrapper[4858]: I0202 18:18:29.253107 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d737025-7e41-4af9-a1d9-1ebbfddd8ddb-host" (OuterVolumeSpecName: "host") pod "2d737025-7e41-4af9-a1d9-1ebbfddd8ddb" (UID: "2d737025-7e41-4af9-a1d9-1ebbfddd8ddb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 18:18:29 crc kubenswrapper[4858]: I0202 18:18:29.253465 4858 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2d737025-7e41-4af9-a1d9-1ebbfddd8ddb-host\") on node \"crc\" DevicePath \"\"" Feb 02 18:18:29 crc kubenswrapper[4858]: I0202 18:18:29.259202 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d737025-7e41-4af9-a1d9-1ebbfddd8ddb-kube-api-access-q8jcx" (OuterVolumeSpecName: "kube-api-access-q8jcx") pod "2d737025-7e41-4af9-a1d9-1ebbfddd8ddb" (UID: "2d737025-7e41-4af9-a1d9-1ebbfddd8ddb"). InnerVolumeSpecName "kube-api-access-q8jcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:18:29 crc kubenswrapper[4858]: I0202 18:18:29.355193 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8jcx\" (UniqueName: \"kubernetes.io/projected/2d737025-7e41-4af9-a1d9-1ebbfddd8ddb-kube-api-access-q8jcx\") on node \"crc\" DevicePath \"\"" Feb 02 18:18:30 crc kubenswrapper[4858]: I0202 18:18:30.020791 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="215155b5703d2f037273a231d4bfb839de3fc2bcf56a43f71fe5b1b22c60ce69" Feb 02 18:18:30 crc kubenswrapper[4858]: I0202 18:18:30.020833 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4ln74/crc-debug-w5blz" Feb 02 18:18:30 crc kubenswrapper[4858]: I0202 18:18:30.416998 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d737025-7e41-4af9-a1d9-1ebbfddd8ddb" path="/var/lib/kubelet/pods/2d737025-7e41-4af9-a1d9-1ebbfddd8ddb/volumes" Feb 02 18:18:30 crc kubenswrapper[4858]: I0202 18:18:30.417585 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4ln74/crc-debug-bcgrc"] Feb 02 18:18:30 crc kubenswrapper[4858]: E0202 18:18:30.418267 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d737025-7e41-4af9-a1d9-1ebbfddd8ddb" containerName="container-00" Feb 02 18:18:30 crc kubenswrapper[4858]: I0202 18:18:30.418286 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d737025-7e41-4af9-a1d9-1ebbfddd8ddb" containerName="container-00" Feb 02 18:18:30 crc kubenswrapper[4858]: I0202 18:18:30.418520 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d737025-7e41-4af9-a1d9-1ebbfddd8ddb" containerName="container-00" Feb 02 18:18:30 crc kubenswrapper[4858]: I0202 18:18:30.419443 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4ln74/crc-debug-bcgrc" Feb 02 18:18:30 crc kubenswrapper[4858]: I0202 18:18:30.479161 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svkvq\" (UniqueName: \"kubernetes.io/projected/b54702f4-9335-44fd-aa94-d95f06cc879b-kube-api-access-svkvq\") pod \"crc-debug-bcgrc\" (UID: \"b54702f4-9335-44fd-aa94-d95f06cc879b\") " pod="openshift-must-gather-4ln74/crc-debug-bcgrc" Feb 02 18:18:30 crc kubenswrapper[4858]: I0202 18:18:30.479316 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b54702f4-9335-44fd-aa94-d95f06cc879b-host\") pod \"crc-debug-bcgrc\" (UID: \"b54702f4-9335-44fd-aa94-d95f06cc879b\") " pod="openshift-must-gather-4ln74/crc-debug-bcgrc" Feb 02 18:18:30 crc kubenswrapper[4858]: I0202 18:18:30.581057 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b54702f4-9335-44fd-aa94-d95f06cc879b-host\") pod \"crc-debug-bcgrc\" (UID: \"b54702f4-9335-44fd-aa94-d95f06cc879b\") " pod="openshift-must-gather-4ln74/crc-debug-bcgrc" Feb 02 18:18:30 crc kubenswrapper[4858]: I0202 18:18:30.581178 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b54702f4-9335-44fd-aa94-d95f06cc879b-host\") pod \"crc-debug-bcgrc\" (UID: \"b54702f4-9335-44fd-aa94-d95f06cc879b\") " pod="openshift-must-gather-4ln74/crc-debug-bcgrc" Feb 02 18:18:30 crc kubenswrapper[4858]: I0202 18:18:30.581373 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svkvq\" (UniqueName: \"kubernetes.io/projected/b54702f4-9335-44fd-aa94-d95f06cc879b-kube-api-access-svkvq\") pod \"crc-debug-bcgrc\" (UID: \"b54702f4-9335-44fd-aa94-d95f06cc879b\") " pod="openshift-must-gather-4ln74/crc-debug-bcgrc" Feb 02 18:18:30 crc kubenswrapper[4858]: I0202 18:18:30.601644 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svkvq\" (UniqueName: \"kubernetes.io/projected/b54702f4-9335-44fd-aa94-d95f06cc879b-kube-api-access-svkvq\") pod \"crc-debug-bcgrc\" (UID: \"b54702f4-9335-44fd-aa94-d95f06cc879b\") " 
pod="openshift-must-gather-4ln74/crc-debug-bcgrc" Feb 02 18:18:30 crc kubenswrapper[4858]: I0202 18:18:30.740129 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4ln74/crc-debug-bcgrc" Feb 02 18:18:30 crc kubenswrapper[4858]: W0202 18:18:30.775742 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb54702f4_9335_44fd_aa94_d95f06cc879b.slice/crio-53653d446ed8aa87fbef2045dc937c55300130bdb03482cfb844a87324dc1828 WatchSource:0}: Error finding container 53653d446ed8aa87fbef2045dc937c55300130bdb03482cfb844a87324dc1828: Status 404 returned error can't find the container with id 53653d446ed8aa87fbef2045dc937c55300130bdb03482cfb844a87324dc1828 Feb 02 18:18:31 crc kubenswrapper[4858]: I0202 18:18:31.033040 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4ln74/crc-debug-bcgrc" event={"ID":"b54702f4-9335-44fd-aa94-d95f06cc879b","Type":"ContainerStarted","Data":"93d39d782047cb73b5cc52fd984be3b410001501ff2fba408c17222fa0fea02f"} Feb 02 18:18:31 crc kubenswrapper[4858]: I0202 18:18:31.033104 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4ln74/crc-debug-bcgrc" event={"ID":"b54702f4-9335-44fd-aa94-d95f06cc879b","Type":"ContainerStarted","Data":"53653d446ed8aa87fbef2045dc937c55300130bdb03482cfb844a87324dc1828"} Feb 02 18:18:32 crc kubenswrapper[4858]: I0202 18:18:32.044560 4858 generic.go:334] "Generic (PLEG): container finished" podID="b54702f4-9335-44fd-aa94-d95f06cc879b" containerID="93d39d782047cb73b5cc52fd984be3b410001501ff2fba408c17222fa0fea02f" exitCode=0 Feb 02 18:18:32 crc kubenswrapper[4858]: I0202 18:18:32.044592 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4ln74/crc-debug-bcgrc" event={"ID":"b54702f4-9335-44fd-aa94-d95f06cc879b","Type":"ContainerDied","Data":"93d39d782047cb73b5cc52fd984be3b410001501ff2fba408c17222fa0fea02f"} Feb 02 18:18:33 crc kubenswrapper[4858]: I0202 18:18:33.150465 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4ln74/crc-debug-bcgrc" Feb 02 18:18:33 crc kubenswrapper[4858]: I0202 18:18:33.183363 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4ln74/crc-debug-bcgrc"] Feb 02 18:18:33 crc kubenswrapper[4858]: I0202 18:18:33.193769 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4ln74/crc-debug-bcgrc"] Feb 02 18:18:33 crc kubenswrapper[4858]: I0202 18:18:33.245724 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svkvq\" (UniqueName: \"kubernetes.io/projected/b54702f4-9335-44fd-aa94-d95f06cc879b-kube-api-access-svkvq\") pod \"b54702f4-9335-44fd-aa94-d95f06cc879b\" (UID: \"b54702f4-9335-44fd-aa94-d95f06cc879b\") " Feb 02 18:18:33 crc kubenswrapper[4858]: I0202 18:18:33.245792 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b54702f4-9335-44fd-aa94-d95f06cc879b-host\") pod \"b54702f4-9335-44fd-aa94-d95f06cc879b\" (UID: \"b54702f4-9335-44fd-aa94-d95f06cc879b\") " Feb 02 18:18:33 crc kubenswrapper[4858]: I0202 18:18:33.245986 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b54702f4-9335-44fd-aa94-d95f06cc879b-host" (OuterVolumeSpecName: "host") pod "b54702f4-9335-44fd-aa94-d95f06cc879b" (UID: "b54702f4-9335-44fd-aa94-d95f06cc879b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 18:18:33 crc kubenswrapper[4858]: I0202 18:18:33.246470 4858 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b54702f4-9335-44fd-aa94-d95f06cc879b-host\") on node \"crc\" DevicePath \"\"" Feb 02 18:18:33 crc kubenswrapper[4858]: I0202 18:18:33.251551 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b54702f4-9335-44fd-aa94-d95f06cc879b-kube-api-access-svkvq" (OuterVolumeSpecName: "kube-api-access-svkvq") pod "b54702f4-9335-44fd-aa94-d95f06cc879b" (UID: "b54702f4-9335-44fd-aa94-d95f06cc879b"). InnerVolumeSpecName "kube-api-access-svkvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:18:33 crc kubenswrapper[4858]: I0202 18:18:33.348315 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svkvq\" (UniqueName: \"kubernetes.io/projected/b54702f4-9335-44fd-aa94-d95f06cc879b-kube-api-access-svkvq\") on node \"crc\" DevicePath \"\"" Feb 02 18:18:34 crc kubenswrapper[4858]: I0202 18:18:34.064944 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53653d446ed8aa87fbef2045dc937c55300130bdb03482cfb844a87324dc1828" Feb 02 18:18:34 crc kubenswrapper[4858]: I0202 18:18:34.065096 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4ln74/crc-debug-bcgrc" Feb 02 18:18:34 crc kubenswrapper[4858]: I0202 18:18:34.367679 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4ln74/crc-debug-bhcpn"] Feb 02 18:18:34 crc kubenswrapper[4858]: E0202 18:18:34.368120 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b54702f4-9335-44fd-aa94-d95f06cc879b" containerName="container-00" Feb 02 18:18:34 crc kubenswrapper[4858]: I0202 18:18:34.368134 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b54702f4-9335-44fd-aa94-d95f06cc879b" containerName="container-00" Feb 02 18:18:34 crc kubenswrapper[4858]: I0202 18:18:34.368338 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b54702f4-9335-44fd-aa94-d95f06cc879b" containerName="container-00" Feb 02 18:18:34 crc kubenswrapper[4858]: I0202 18:18:34.369072 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4ln74/crc-debug-bhcpn" Feb 02 18:18:34 crc kubenswrapper[4858]: I0202 18:18:34.413548 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b54702f4-9335-44fd-aa94-d95f06cc879b" path="/var/lib/kubelet/pods/b54702f4-9335-44fd-aa94-d95f06cc879b/volumes" Feb 02 18:18:34 crc kubenswrapper[4858]: I0202 18:18:34.473320 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/180305bc-fe2b-4a1f-ac36-5e9f44d89e8d-host\") pod \"crc-debug-bhcpn\" (UID: \"180305bc-fe2b-4a1f-ac36-5e9f44d89e8d\") " pod="openshift-must-gather-4ln74/crc-debug-bhcpn" Feb 02 18:18:34 crc kubenswrapper[4858]: I0202 18:18:34.473478 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb8lc\" (UniqueName: \"kubernetes.io/projected/180305bc-fe2b-4a1f-ac36-5e9f44d89e8d-kube-api-access-vb8lc\") pod \"crc-debug-bhcpn\" (UID: \"180305bc-fe2b-4a1f-ac36-5e9f44d89e8d\") " pod="openshift-must-gather-4ln74/crc-debug-bhcpn" Feb 02 18:18:34 crc kubenswrapper[4858]: I0202 18:18:34.575115 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/180305bc-fe2b-4a1f-ac36-5e9f44d89e8d-host\") pod \"crc-debug-bhcpn\" (UID: \"180305bc-fe2b-4a1f-ac36-5e9f44d89e8d\") " pod="openshift-must-gather-4ln74/crc-debug-bhcpn" Feb 02 18:18:34 crc kubenswrapper[4858]: I0202 18:18:34.575250 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb8lc\" (UniqueName: \"kubernetes.io/projected/180305bc-fe2b-4a1f-ac36-5e9f44d89e8d-kube-api-access-vb8lc\") pod \"crc-debug-bhcpn\" (UID: \"180305bc-fe2b-4a1f-ac36-5e9f44d89e8d\") " pod="openshift-must-gather-4ln74/crc-debug-bhcpn" Feb 02 18:18:34 crc kubenswrapper[4858]: I0202 18:18:34.575378 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/180305bc-fe2b-4a1f-ac36-5e9f44d89e8d-host\") pod \"crc-debug-bhcpn\" (UID: \"180305bc-fe2b-4a1f-ac36-5e9f44d89e8d\") " pod="openshift-must-gather-4ln74/crc-debug-bhcpn" Feb 02 18:18:34 crc kubenswrapper[4858]: I0202 18:18:34.597687 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb8lc\" (UniqueName: \"kubernetes.io/projected/180305bc-fe2b-4a1f-ac36-5e9f44d89e8d-kube-api-access-vb8lc\") pod \"crc-debug-bhcpn\" (UID: \"180305bc-fe2b-4a1f-ac36-5e9f44d89e8d\") " 
pod="openshift-must-gather-4ln74/crc-debug-bhcpn" Feb 02 18:18:34 crc kubenswrapper[4858]: I0202 18:18:34.696023 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4ln74/crc-debug-bhcpn" Feb 02 18:18:34 crc kubenswrapper[4858]: W0202 18:18:34.726209 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod180305bc_fe2b_4a1f_ac36_5e9f44d89e8d.slice/crio-863f47dfe53a1c8ba4ff6d5440c0a9ada5e1ad2d62929e6025b18ca098484db6 WatchSource:0}: Error finding container 863f47dfe53a1c8ba4ff6d5440c0a9ada5e1ad2d62929e6025b18ca098484db6: Status 404 returned error can't find the container with id 863f47dfe53a1c8ba4ff6d5440c0a9ada5e1ad2d62929e6025b18ca098484db6 Feb 02 18:18:35 crc kubenswrapper[4858]: I0202 18:18:35.078378 4858 generic.go:334] "Generic (PLEG): container finished" podID="180305bc-fe2b-4a1f-ac36-5e9f44d89e8d" containerID="fd65df52d957783fbd19cf5e7dfb9354838b1020d686468dceda39383a5c0955" exitCode=0 Feb 02 18:18:35 crc kubenswrapper[4858]: I0202 18:18:35.078527 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4ln74/crc-debug-bhcpn" event={"ID":"180305bc-fe2b-4a1f-ac36-5e9f44d89e8d","Type":"ContainerDied","Data":"fd65df52d957783fbd19cf5e7dfb9354838b1020d686468dceda39383a5c0955"} Feb 02 18:18:35 crc kubenswrapper[4858]: I0202 18:18:35.078723 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4ln74/crc-debug-bhcpn" event={"ID":"180305bc-fe2b-4a1f-ac36-5e9f44d89e8d","Type":"ContainerStarted","Data":"863f47dfe53a1c8ba4ff6d5440c0a9ada5e1ad2d62929e6025b18ca098484db6"} Feb 02 18:18:35 crc kubenswrapper[4858]: I0202 18:18:35.130917 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4ln74/crc-debug-bhcpn"] Feb 02 18:18:35 crc kubenswrapper[4858]: I0202 18:18:35.135205 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4ln74/crc-debug-bhcpn"] Feb 02 18:18:36 crc kubenswrapper[4858]: I0202 18:18:36.204042 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4ln74/crc-debug-bhcpn" Feb 02 18:18:36 crc kubenswrapper[4858]: I0202 18:18:36.314937 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/180305bc-fe2b-4a1f-ac36-5e9f44d89e8d-host\") pod \"180305bc-fe2b-4a1f-ac36-5e9f44d89e8d\" (UID: \"180305bc-fe2b-4a1f-ac36-5e9f44d89e8d\") " Feb 02 18:18:36 crc kubenswrapper[4858]: I0202 18:18:36.315080 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/180305bc-fe2b-4a1f-ac36-5e9f44d89e8d-host" (OuterVolumeSpecName: "host") pod "180305bc-fe2b-4a1f-ac36-5e9f44d89e8d" (UID: "180305bc-fe2b-4a1f-ac36-5e9f44d89e8d"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 18:18:36 crc kubenswrapper[4858]: I0202 18:18:36.315208 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb8lc\" (UniqueName: \"kubernetes.io/projected/180305bc-fe2b-4a1f-ac36-5e9f44d89e8d-kube-api-access-vb8lc\") pod \"180305bc-fe2b-4a1f-ac36-5e9f44d89e8d\" (UID: \"180305bc-fe2b-4a1f-ac36-5e9f44d89e8d\") " Feb 02 18:18:36 crc kubenswrapper[4858]: I0202 18:18:36.315852 4858 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/180305bc-fe2b-4a1f-ac36-5e9f44d89e8d-host\") on node \"crc\" DevicePath \"\"" Feb 02 18:18:36 crc kubenswrapper[4858]: I0202 18:18:36.320758 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/180305bc-fe2b-4a1f-ac36-5e9f44d89e8d-kube-api-access-vb8lc" (OuterVolumeSpecName: "kube-api-access-vb8lc") pod "180305bc-fe2b-4a1f-ac36-5e9f44d89e8d" (UID: "180305bc-fe2b-4a1f-ac36-5e9f44d89e8d"). InnerVolumeSpecName "kube-api-access-vb8lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:18:36 crc kubenswrapper[4858]: I0202 18:18:36.411428 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="180305bc-fe2b-4a1f-ac36-5e9f44d89e8d" path="/var/lib/kubelet/pods/180305bc-fe2b-4a1f-ac36-5e9f44d89e8d/volumes" Feb 02 18:18:36 crc kubenswrapper[4858]: I0202 18:18:36.417595 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb8lc\" (UniqueName: \"kubernetes.io/projected/180305bc-fe2b-4a1f-ac36-5e9f44d89e8d-kube-api-access-vb8lc\") on node \"crc\" DevicePath \"\"" Feb 02 18:18:37 crc kubenswrapper[4858]: I0202 18:18:37.096630 4858 scope.go:117] "RemoveContainer" containerID="fd65df52d957783fbd19cf5e7dfb9354838b1020d686468dceda39383a5c0955" Feb 02 18:18:37 crc kubenswrapper[4858]: I0202 18:18:37.096649 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4ln74/crc-debug-bhcpn" Feb 02 18:18:57 crc kubenswrapper[4858]: I0202 18:18:57.808153 4858 patch_prober.go:28] interesting pod/machine-config-daemon-lbvl2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 18:18:57 crc kubenswrapper[4858]: I0202 18:18:57.808944 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 18:18:57 crc kubenswrapper[4858]: I0202 18:18:57.809024 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" Feb 02 18:18:57 crc kubenswrapper[4858]: I0202 18:18:57.810021 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c"} pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 18:18:57 crc kubenswrapper[4858]: I0202 18:18:57.810087 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerName="machine-config-daemon" containerID="cri-o://2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" gracePeriod=600 Feb 02 18:18:57 crc kubenswrapper[4858]: E0202 18:18:57.936791 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:18:58 crc kubenswrapper[4858]: I0202 18:18:58.273069 4858 generic.go:334] "Generic (PLEG): container finished" podID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" exitCode=0 Feb 02 18:18:58 crc kubenswrapper[4858]: I0202 18:18:58.273122 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerDied","Data":"2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c"} Feb 02 18:18:58 crc kubenswrapper[4858]: I0202 18:18:58.273185 4858 scope.go:117] "RemoveContainer" containerID="0407b826940258d2b90ce9df3d656cf4cd038bfd8d47c76fe3dfc58f88e9b7c6" Feb 02 18:18:58 crc kubenswrapper[4858]: I0202 18:18:58.273785 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" Feb 02 18:18:58 crc kubenswrapper[4858]: E0202 18:18:58.274082 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
Feb 02 18:19:03 crc kubenswrapper[4858]: I0202 18:19:03.835174 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59c6cb6f96-ss676_8dbf9cae-d42c-47ae-b117-3fd56628b72f/barbican-api/0.log"
Feb 02 18:19:03 crc kubenswrapper[4858]: I0202 18:19:03.955418 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59c6cb6f96-ss676_8dbf9cae-d42c-47ae-b117-3fd56628b72f/barbican-api-log/0.log"
Feb 02 18:19:04 crc kubenswrapper[4858]: I0202 18:19:04.003518 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-749df8c57d-rd7dc_08f67234-d648-4127-98d7-fcf00df7e1d3/barbican-keystone-listener/0.log"
Feb 02 18:19:04 crc kubenswrapper[4858]: I0202 18:19:04.049001 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-749df8c57d-rd7dc_08f67234-d648-4127-98d7-fcf00df7e1d3/barbican-keystone-listener-log/0.log"
Feb 02 18:19:04 crc kubenswrapper[4858]: I0202 18:19:04.189020 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8b9496955-6bsmq_9cd8cddc-99bb-4e60-85e5-07d6090cfd49/barbican-worker/0.log"
Feb 02 18:19:04 crc kubenswrapper[4858]: I0202 18:19:04.246890 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8b9496955-6bsmq_9cd8cddc-99bb-4e60-85e5-07d6090cfd49/barbican-worker-log/0.log"
Feb 02 18:19:04 crc kubenswrapper[4858]: I0202 18:19:04.445779 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-pr5qm_d0787e12-6645-4df3-8850-b9698b323f69/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 02 18:19:04 crc kubenswrapper[4858]: I0202 18:19:04.515252 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_32e8a9b4-688e-42b5-8562-23463e2632c1/ceilometer-central-agent/0.log"
Feb 02 18:19:04 crc kubenswrapper[4858]: I0202 18:19:04.600229 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_32e8a9b4-688e-42b5-8562-23463e2632c1/proxy-httpd/0.log"
Feb 02 18:19:04 crc kubenswrapper[4858]: I0202 18:19:04.611258 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_32e8a9b4-688e-42b5-8562-23463e2632c1/ceilometer-notification-agent/0.log"
Feb 02 18:19:04 crc kubenswrapper[4858]: I0202 18:19:04.661659 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_32e8a9b4-688e-42b5-8562-23463e2632c1/sg-core/0.log"
Feb 02 18:19:04 crc kubenswrapper[4858]: I0202 18:19:04.811432 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c8a1f97c-10b9-489f-9711-d6cd63f6e974/cinder-api-log/0.log"
Feb 02 18:19:04 crc kubenswrapper[4858]: I0202 18:19:04.840877 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c8a1f97c-10b9-489f-9711-d6cd63f6e974/cinder-api/0.log"
Feb 02 18:19:04 crc kubenswrapper[4858]: I0202 18:19:04.993009 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_34935d73-a8f5-4b92-83fc-734815dbb836/cinder-scheduler/0.log"
Feb 02 18:19:05 crc kubenswrapper[4858]: I0202 18:19:05.061661 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_34935d73-a8f5-4b92-83fc-734815dbb836/probe/0.log"
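Every path in this long "Finished parsing log file" run follows the CRI logging layout visible in the entries themselves: /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container>/<restart-count>.log. A small standalone Go sketch (hypothetical helper, written for illustration) that decomposes such a path:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// splitPodLogPath decomposes a /var/log/pods path of the form
// <ns>_<pod>_<uid>/<container>/<restart>.log, as seen in the entries above.
// Namespace and pod names cannot contain '_', so splitting on it is safe.
func splitPodLogPath(p string) (ns, pod, uid, container, restart string, ok bool) {
	rel, err := filepath.Rel("/var/log/pods", p)
	if err != nil {
		return
	}
	parts := strings.Split(rel, string(filepath.Separator))
	if len(parts) != 3 {
		return
	}
	meta := strings.SplitN(parts[0], "_", 3)
	if len(meta) != 3 {
		return
	}
	return meta[0], meta[1], meta[2], parts[1],
		strings.TrimSuffix(parts[2], ".log"), true
}

func main() {
	ns, pod, uid, ctr, restart, ok := splitPodLogPath(
		"/var/log/pods/openstack_cinder-api-0_c8a1f97c-10b9-489f-9711-d6cd63f6e974/cinder-api/0.log")
	fmt.Println(ns, pod, uid, ctr, restart, ok)
}
```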
path="/var/log/pods/openstack_cinder-scheduler-0_34935d73-a8f5-4b92-83fc-734815dbb836/probe/0.log" Feb 02 18:19:05 crc kubenswrapper[4858]: I0202 18:19:05.142082 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-6h2qq_07f60796-9efa-4245-955f-14c0c16c918d/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:19:05 crc kubenswrapper[4858]: I0202 18:19:05.310300 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-9qjdw_18853ae6-771f-43f8-a6e9-5501f381891d/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:19:05 crc kubenswrapper[4858]: I0202 18:19:05.391195 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-96qx9_435e285f-7731-45f3-8c96-282da49d50bf/init/0.log" Feb 02 18:19:05 crc kubenswrapper[4858]: I0202 18:19:05.539350 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-96qx9_435e285f-7731-45f3-8c96-282da49d50bf/init/0.log" Feb 02 18:19:05 crc kubenswrapper[4858]: I0202 18:19:05.595960 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-96qx9_435e285f-7731-45f3-8c96-282da49d50bf/dnsmasq-dns/0.log" Feb 02 18:19:05 crc kubenswrapper[4858]: I0202 18:19:05.621921 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-27k29_b94ab7ee-11a9-42ea-ae40-32926a53ed9a/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:19:05 crc kubenswrapper[4858]: I0202 18:19:05.791924 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_44559c36-6bc9-41d7-810f-f68bb1ed9d18/glance-httpd/0.log" Feb 02 18:19:05 crc kubenswrapper[4858]: I0202 18:19:05.904166 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_44559c36-6bc9-41d7-810f-f68bb1ed9d18/glance-log/0.log" Feb 02 18:19:06 crc kubenswrapper[4858]: I0202 18:19:06.006876 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_15e52f85-8dc6-46f7-8844-701c3e76839c/glance-httpd/0.log" Feb 02 18:19:06 crc kubenswrapper[4858]: I0202 18:19:06.016616 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_15e52f85-8dc6-46f7-8844-701c3e76839c/glance-log/0.log" Feb 02 18:19:06 crc kubenswrapper[4858]: I0202 18:19:06.152007 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-68f4b57796-rhdnw_4a208969-437b-449b-ba53-89364175a52a/horizon/0.log" Feb 02 18:19:06 crc kubenswrapper[4858]: I0202 18:19:06.344562 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-4gs56_c9df746d-9cca-49c2-88e3-8be52b5e9531/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:19:06 crc kubenswrapper[4858]: I0202 18:19:06.493122 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-65b4t_dbcce266-9b8e-489e-935d-17695dd8cf62/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:19:06 crc kubenswrapper[4858]: I0202 18:19:06.557064 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-68f4b57796-rhdnw_4a208969-437b-449b-ba53-89364175a52a/horizon-log/0.log" Feb 02 18:19:06 crc kubenswrapper[4858]: 
Feb 02 18:19:06 crc kubenswrapper[4858]: I0202 18:19:06.847326 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6fb4977965-lqqjm_e92af156-c3ae-4bdc-bf59-b07c51dbaef6/keystone-api/0.log"
Feb 02 18:19:06 crc kubenswrapper[4858]: I0202 18:19:06.974095 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_25eaef2f-c235-44b2-847b-6d4a275f1c3d/kube-state-metrics/0.log"
Feb 02 18:19:07 crc kubenswrapper[4858]: I0202 18:19:07.048479 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-bqcrc_2ab876f9-d750-4647-8212-6f9c4bee6eee/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 02 18:19:07 crc kubenswrapper[4858]: I0202 18:19:07.489723 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5765cfccfc-zqg5s_985d2863-cf61-4125-9842-28ec8706dea9/neutron-httpd/0.log"
Feb 02 18:19:07 crc kubenswrapper[4858]: I0202 18:19:07.492084 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5765cfccfc-zqg5s_985d2863-cf61-4125-9842-28ec8706dea9/neutron-api/0.log"
Feb 02 18:19:07 crc kubenswrapper[4858]: I0202 18:19:07.694847 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-2qff2_888cd580-fe65-443a-ac8f-351364f34183/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 02 18:19:08 crc kubenswrapper[4858]: I0202 18:19:08.246059 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_5402e6ff-48ec-47b2-b68e-3385e51ec388/nova-api-log/0.log"
Feb 02 18:19:08 crc kubenswrapper[4858]: I0202 18:19:08.290630 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_cf4ff043-2e61-44ec-a4ca-b93c524edf89/nova-cell0-conductor-conductor/0.log"
Feb 02 18:19:08 crc kubenswrapper[4858]: I0202 18:19:08.589088 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_846f9c74-1b28-40d3-b2f9-ed7b380fa34f/nova-cell1-conductor-conductor/0.log"
Feb 02 18:19:08 crc kubenswrapper[4858]: I0202 18:19:08.700598 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_5402e6ff-48ec-47b2-b68e-3385e51ec388/nova-api-api/0.log"
Feb 02 18:19:08 crc kubenswrapper[4858]: I0202 18:19:08.706842 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_319e3f38-af96-4ac6-9791-094f9a7d67ab/nova-cell1-novncproxy-novncproxy/0.log"
Feb 02 18:19:08 crc kubenswrapper[4858]: I0202 18:19:08.849886 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-q9xnp_6b5342fc-b2c3-4a83-a74d-a49a34ac15a4/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 02 18:19:09 crc kubenswrapper[4858]: I0202 18:19:09.082673 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_52ad277d-ba1e-4129-b696-f4fa1a598d72/nova-metadata-log/0.log"
Feb 02 18:19:09 crc kubenswrapper[4858]: I0202 18:19:09.385259 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8a3a3fdc-3021-44f0-8520-da5a88cf03e1/mysql-bootstrap/0.log"
Feb 02 18:19:09 crc kubenswrapper[4858]: I0202 18:19:09.489552 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_7215c3c5-9746-4192-b018-0c31b42cee4d/nova-scheduler-scheduler/0.log"
file" path="/var/log/pods/openstack_nova-scheduler-0_7215c3c5-9746-4192-b018-0c31b42cee4d/nova-scheduler-scheduler/0.log" Feb 02 18:19:09 crc kubenswrapper[4858]: I0202 18:19:09.588412 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8a3a3fdc-3021-44f0-8520-da5a88cf03e1/galera/0.log" Feb 02 18:19:09 crc kubenswrapper[4858]: I0202 18:19:09.598209 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8a3a3fdc-3021-44f0-8520-da5a88cf03e1/mysql-bootstrap/0.log" Feb 02 18:19:09 crc kubenswrapper[4858]: I0202 18:19:09.804802 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3a24f351-b5a8-444d-b67d-7b9635f5a8aa/mysql-bootstrap/0.log" Feb 02 18:19:10 crc kubenswrapper[4858]: I0202 18:19:10.069090 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3a24f351-b5a8-444d-b67d-7b9635f5a8aa/mysql-bootstrap/0.log" Feb 02 18:19:10 crc kubenswrapper[4858]: I0202 18:19:10.072496 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3a24f351-b5a8-444d-b67d-7b9635f5a8aa/galera/0.log" Feb 02 18:19:10 crc kubenswrapper[4858]: I0202 18:19:10.258416 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d0882d39-e033-4ce8-8b09-76d55e1c281c/openstackclient/0.log" Feb 02 18:19:10 crc kubenswrapper[4858]: I0202 18:19:10.351299 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-h6kmt_334dab9b-9793-4424-9c39-27eac5f07626/ovn-controller/0.log" Feb 02 18:19:10 crc kubenswrapper[4858]: I0202 18:19:10.394192 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_52ad277d-ba1e-4129-b696-f4fa1a598d72/nova-metadata-metadata/0.log" Feb 02 18:19:10 crc kubenswrapper[4858]: I0202 18:19:10.407109 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" Feb 02 18:19:10 crc kubenswrapper[4858]: E0202 18:19:10.407629 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:19:10 crc kubenswrapper[4858]: I0202 18:19:10.579594 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-g5d9v_de17af80-1849-4a19-ae89-50057bc76aa3/openstack-network-exporter/0.log" Feb 02 18:19:10 crc kubenswrapper[4858]: I0202 18:19:10.596311 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tc4gv_77df6a52-36fd-44ea-b30e-33041ed49ed6/ovsdb-server-init/0.log" Feb 02 18:19:10 crc kubenswrapper[4858]: I0202 18:19:10.857848 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tc4gv_77df6a52-36fd-44ea-b30e-33041ed49ed6/ovs-vswitchd/0.log" Feb 02 18:19:10 crc kubenswrapper[4858]: I0202 18:19:10.858067 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tc4gv_77df6a52-36fd-44ea-b30e-33041ed49ed6/ovsdb-server-init/0.log" Feb 02 18:19:10 crc kubenswrapper[4858]: I0202 18:19:10.935517 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-tc4gv_77df6a52-36fd-44ea-b30e-33041ed49ed6/ovsdb-server/0.log" Feb 02 18:19:11 crc kubenswrapper[4858]: I0202 18:19:11.342222 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-scsrh_d14bee68-7779-4c77-916e-a58d2a871918/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:19:11 crc kubenswrapper[4858]: I0202 18:19:11.363570 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3c6b95f0-73a1-4b25-9905-2fa224e52142/openstack-network-exporter/0.log" Feb 02 18:19:11 crc kubenswrapper[4858]: I0202 18:19:11.365792 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3c6b95f0-73a1-4b25-9905-2fa224e52142/ovn-northd/0.log" Feb 02 18:19:11 crc kubenswrapper[4858]: I0202 18:19:11.542161 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_10f1d4cf-2e13-41b0-b29a-f889e2acf0d0/openstack-network-exporter/0.log" Feb 02 18:19:11 crc kubenswrapper[4858]: I0202 18:19:11.613156 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_10f1d4cf-2e13-41b0-b29a-f889e2acf0d0/ovsdbserver-nb/0.log" Feb 02 18:19:11 crc kubenswrapper[4858]: I0202 18:19:11.781550 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a62694a3-fa2d-4765-ac02-3d19c4779d21/openstack-network-exporter/0.log" Feb 02 18:19:11 crc kubenswrapper[4858]: I0202 18:19:11.798133 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a62694a3-fa2d-4765-ac02-3d19c4779d21/ovsdbserver-sb/0.log" Feb 02 18:19:12 crc kubenswrapper[4858]: I0202 18:19:12.034689 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-b4fd7664d-fqkmq_113e6fbe-f0ce-497b-8a16-fb8bc217b584/placement-api/0.log" Feb 02 18:19:12 crc kubenswrapper[4858]: I0202 18:19:12.081125 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-b4fd7664d-fqkmq_113e6fbe-f0ce-497b-8a16-fb8bc217b584/placement-log/0.log" Feb 02 18:19:12 crc kubenswrapper[4858]: I0202 18:19:12.120703 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_09b56fe4-3166-4448-a186-95f3c74199f1/setup-container/0.log" Feb 02 18:19:12 crc kubenswrapper[4858]: I0202 18:19:12.351862 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_09b56fe4-3166-4448-a186-95f3c74199f1/rabbitmq/0.log" Feb 02 18:19:12 crc kubenswrapper[4858]: I0202 18:19:12.376385 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_09b56fe4-3166-4448-a186-95f3c74199f1/setup-container/0.log" Feb 02 18:19:12 crc kubenswrapper[4858]: I0202 18:19:12.443516 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f470a8b9-224f-436f-bbbb-c6ab6b1f587e/setup-container/0.log" Feb 02 18:19:12 crc kubenswrapper[4858]: I0202 18:19:12.603867 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f470a8b9-224f-436f-bbbb-c6ab6b1f587e/setup-container/0.log" Feb 02 18:19:12 crc kubenswrapper[4858]: I0202 18:19:12.664607 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f470a8b9-224f-436f-bbbb-c6ab6b1f587e/rabbitmq/0.log" Feb 02 18:19:12 crc kubenswrapper[4858]: I0202 18:19:12.724182 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-tvvsw_29249271-e3d7-41c6-8795-5c1b969161e0/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:19:12 crc kubenswrapper[4858]: I0202 18:19:12.920509 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-h7vtt_ae4faa34-1d19-468e-9fc2-f2bb7ad7aa0c/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:19:12 crc kubenswrapper[4858]: I0202 18:19:12.933929 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-6hs2p_6ff7fb8a-e464-4395-a2a3-60ed7a06ba5b/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:19:13 crc kubenswrapper[4858]: I0202 18:19:13.165372 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-ztwkx_4a932e2b-79f7-41ef-b7e6-1e0789b67551/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:19:13 crc kubenswrapper[4858]: I0202 18:19:13.305296 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-2d55x_4ef22884-b1a4-454a-afa5-cde0aaa3439b/ssh-known-hosts-edpm-deployment/0.log" Feb 02 18:19:13 crc kubenswrapper[4858]: I0202 18:19:13.480141 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7748685595-fdxjj_6cdd18b7-595d-4635-9a17-32be92896da1/proxy-server/0.log" Feb 02 18:19:13 crc kubenswrapper[4858]: I0202 18:19:13.587307 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7748685595-fdxjj_6cdd18b7-595d-4635-9a17-32be92896da1/proxy-httpd/0.log" Feb 02 18:19:13 crc kubenswrapper[4858]: I0202 18:19:13.708212 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-44qfs_bf16bc74-b9cb-4774-b646-a4de84eb4dd9/swift-ring-rebalance/0.log" Feb 02 18:19:13 crc kubenswrapper[4858]: I0202 18:19:13.769005 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/account-reaper/0.log" Feb 02 18:19:13 crc kubenswrapper[4858]: I0202 18:19:13.821858 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/account-auditor/0.log" Feb 02 18:19:13 crc kubenswrapper[4858]: I0202 18:19:13.951233 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/account-replicator/0.log" Feb 02 18:19:13 crc kubenswrapper[4858]: I0202 18:19:13.989961 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/account-server/0.log" Feb 02 18:19:14 crc kubenswrapper[4858]: I0202 18:19:14.013834 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/container-auditor/0.log" Feb 02 18:19:14 crc kubenswrapper[4858]: I0202 18:19:14.073892 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/container-replicator/0.log" Feb 02 18:19:14 crc kubenswrapper[4858]: I0202 18:19:14.164233 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/container-updater/0.log" Feb 02 18:19:14 crc kubenswrapper[4858]: I0202 18:19:14.211167 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/object-auditor/0.log" Feb 02 18:19:14 crc kubenswrapper[4858]: I0202 18:19:14.225598 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/container-server/0.log" Feb 02 18:19:14 crc kubenswrapper[4858]: I0202 18:19:14.313274 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/object-expirer/0.log" Feb 02 18:19:14 crc kubenswrapper[4858]: I0202 18:19:14.422848 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/object-server/0.log" Feb 02 18:19:14 crc kubenswrapper[4858]: I0202 18:19:14.429872 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/object-updater/0.log" Feb 02 18:19:14 crc kubenswrapper[4858]: I0202 18:19:14.434915 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/object-replicator/0.log" Feb 02 18:19:14 crc kubenswrapper[4858]: I0202 18:19:14.553623 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/rsync/0.log" Feb 02 18:19:14 crc kubenswrapper[4858]: I0202 18:19:14.698619 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_703d6256-20d4-45fc-9a4c-ec6970ea250d/swift-recon-cron/0.log" Feb 02 18:19:14 crc kubenswrapper[4858]: I0202 18:19:14.759032 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-spthf_dd969e2b-6db6-4175-8fa3-7dfa60a198ca/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:19:14 crc kubenswrapper[4858]: I0202 18:19:14.879365 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_6b7d74b4-dbbf-45f2-ae67-2842bbfc8c52/tempest-tests-tempest-tests-runner/0.log" Feb 02 18:19:14 crc kubenswrapper[4858]: I0202 18:19:14.918090 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_3a3e6ddb-d991-4bf6-a248-b333da853203/test-operator-logs-container/0.log" Feb 02 18:19:15 crc kubenswrapper[4858]: I0202 18:19:15.141310 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-9g4rw_9ef2cfe2-ae32-4aca-9bd9-e37ab2b5bc43/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 02 18:19:22 crc kubenswrapper[4858]: I0202 18:19:22.401384 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" Feb 02 18:19:22 crc kubenswrapper[4858]: E0202 18:19:22.401857 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:19:25 crc kubenswrapper[4858]: I0202 18:19:25.557960 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_c386da2d-4b55-47da-aa8c-82b879ae7d3d/memcached/0.log" Feb 02 18:19:37 crc 
Feb 02 18:19:37 crc kubenswrapper[4858]: E0202 18:19:37.401800 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e"
Feb 02 18:19:40 crc kubenswrapper[4858]: I0202 18:19:40.909459 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5_b4156898-70e8-4bdc-a254-49c0917d38dc/util/0.log"
Feb 02 18:19:41 crc kubenswrapper[4858]: I0202 18:19:41.062247 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5_b4156898-70e8-4bdc-a254-49c0917d38dc/util/0.log"
Feb 02 18:19:41 crc kubenswrapper[4858]: I0202 18:19:41.088817 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5_b4156898-70e8-4bdc-a254-49c0917d38dc/pull/0.log"
Feb 02 18:19:41 crc kubenswrapper[4858]: I0202 18:19:41.176111 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5_b4156898-70e8-4bdc-a254-49c0917d38dc/pull/0.log"
Feb 02 18:19:41 crc kubenswrapper[4858]: I0202 18:19:41.313272 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5_b4156898-70e8-4bdc-a254-49c0917d38dc/extract/0.log"
Feb 02 18:19:41 crc kubenswrapper[4858]: I0202 18:19:41.332443 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5_b4156898-70e8-4bdc-a254-49c0917d38dc/pull/0.log"
Feb 02 18:19:41 crc kubenswrapper[4858]: I0202 18:19:41.346949 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_15f5b86401579fb9630133b6201588169551604bbdb88098b8d504caf792pw5_b4156898-70e8-4bdc-a254-49c0917d38dc/util/0.log"
Feb 02 18:19:41 crc kubenswrapper[4858]: I0202 18:19:41.565574 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-r786j_a7c0be68-b4e3-47dc-b6c0-acd8878465ee/manager/0.log"
Feb 02 18:19:41 crc kubenswrapper[4858]: I0202 18:19:41.570825 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-rgpmv_e61e293a-bb2a-4ccd-bc20-815cc2bfb01b/manager/0.log"
Feb 02 18:19:41 crc kubenswrapper[4858]: I0202 18:19:41.904008 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-kcbss_f700cc0f-80eb-46a5-b7d3-b32dccdc2f49/manager/0.log"
Feb 02 18:19:42 crc kubenswrapper[4858]: I0202 18:19:42.033331 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-p9qwv_ad1072ec-d0e8-49ff-9971-8f6589bde802/manager/0.log"
Feb 02 18:19:42 crc kubenswrapper[4858]: I0202 18:19:42.073275 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-99rfw_76ec111a-d121-411c-9d81-8fcfd6323d49/manager/0.log"
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-99rfw_76ec111a-d121-411c-9d81-8fcfd6323d49/manager/0.log" Feb 02 18:19:42 crc kubenswrapper[4858]: I0202 18:19:42.189206 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-kcgxf_8eca62a8-4909-4402-89ff-bd59ad42daef/manager/0.log" Feb 02 18:19:42 crc kubenswrapper[4858]: I0202 18:19:42.466893 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-kffpf_44678b87-d59f-4661-93c9-8e2ddb8ea61e/manager/0.log" Feb 02 18:19:42 crc kubenswrapper[4858]: I0202 18:19:42.485656 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-ck77w_5b2eeae9-b158-4d59-8056-b12e1a397d18/manager/0.log" Feb 02 18:19:42 crc kubenswrapper[4858]: I0202 18:19:42.687608 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-rmkrp_8c70d2b3-c4e9-422f-ace6-f11450c068ec/manager/0.log" Feb 02 18:19:42 crc kubenswrapper[4858]: I0202 18:19:42.722464 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-7cq6h_096752c5-391b-4370-b5f6-39ef63d6878e/manager/0.log" Feb 02 18:19:42 crc kubenswrapper[4858]: I0202 18:19:42.870350 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-2p2q6_2600f62e-5615-4217-9629-9b77846634f9/manager/0.log" Feb 02 18:19:42 crc kubenswrapper[4858]: I0202 18:19:42.976092 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-vpbp7_f5578b04-55cc-4bb9-a3f5-27e63ffe0c27/manager/0.log" Feb 02 18:19:43 crc kubenswrapper[4858]: I0202 18:19:43.153047 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-tfvcz_405115c4-bd24-4b05-b437-a8a27bc1f2b5/manager/0.log" Feb 02 18:19:43 crc kubenswrapper[4858]: I0202 18:19:43.167402 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-2g5g6_b6d0d2c9-a689-4bcf-b3c8-b8aa25e47898/manager/0.log" Feb 02 18:19:43 crc kubenswrapper[4858]: I0202 18:19:43.327669 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dsvmkf_00e707da-7230-4214-82a0-e1b18aad70a8/manager/0.log" Feb 02 18:19:43 crc kubenswrapper[4858]: I0202 18:19:43.500783 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7b85844457-9n8fp_332ff13e-699a-4582-873c-073c20cb6ca0/operator/0.log" Feb 02 18:19:43 crc kubenswrapper[4858]: I0202 18:19:43.828409 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6cl4c_b123c5f8-831f-41b2-a1d0-fcde62501499/registry-server/0.log" Feb 02 18:19:43 crc kubenswrapper[4858]: I0202 18:19:43.913432 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-8r9sc_80f2567c-89d7-4350-a7f2-acd472bc2f68/manager/0.log" Feb 02 18:19:44 crc kubenswrapper[4858]: I0202 18:19:44.174472 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-srbzf_16a9ca97-2b15-4a52-8d2c-eb170a3f2b75/manager/0.log" Feb 02 18:19:44 crc kubenswrapper[4858]: I0202 18:19:44.384238 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-8lqf9_3733a396-b067-4153-891a-1c5b044a7e04/operator/0.log" Feb 02 18:19:44 crc kubenswrapper[4858]: I0202 18:19:44.425733 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-zlfqp_4a72e4a0-8e70-4d04-85c8-15b68840632d/manager/0.log" Feb 02 18:19:44 crc kubenswrapper[4858]: I0202 18:19:44.598245 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-86df59f79f-rczsp_ad13cd52-7254-489a-8960-511bbc2a3360/manager/0.log" Feb 02 18:19:44 crc kubenswrapper[4858]: I0202 18:19:44.665232 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-4jd6l_366ee9f4-9c6e-416a-8603-f6bac0530a6a/manager/0.log" Feb 02 18:19:44 crc kubenswrapper[4858]: I0202 18:19:44.816764 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-z7cp9_da72a0b3-6998-4d0e-b7d3-f4fce5f11f1b/manager/0.log" Feb 02 18:19:44 crc kubenswrapper[4858]: I0202 18:19:44.864132 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-rl5z2_467af09f-e1d2-407e-989e-606a3a3219b0/manager/0.log" Feb 02 18:19:48 crc kubenswrapper[4858]: I0202 18:19:48.401083 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" Feb 02 18:19:48 crc kubenswrapper[4858]: E0202 18:19:48.402033 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:19:59 crc kubenswrapper[4858]: I0202 18:19:59.401737 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" Feb 02 18:19:59 crc kubenswrapper[4858]: E0202 18:19:59.402514 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:20:04 crc kubenswrapper[4858]: I0202 18:20:04.036350 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-hgpp5_e330b41c-dacd-4c4b-a013-dd16a913ac54/control-plane-machine-set-operator/0.log" Feb 02 18:20:04 crc kubenswrapper[4858]: I0202 18:20:04.159284 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ssvjj_f4d39c6c-15e3-48a3-82be-2bc3703dbc7f/kube-rbac-proxy/0.log" Feb 02 18:20:04 crc 
Feb 02 18:20:13 crc kubenswrapper[4858]: I0202 18:20:13.401327 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c"
Feb 02 18:20:13 crc kubenswrapper[4858]: E0202 18:20:13.402052 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e"
Feb 02 18:20:17 crc kubenswrapper[4858]: I0202 18:20:17.973024 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-dzxc6_bc586ae0-865f-490b-8ca0-bb157144af30/cert-manager-controller/0.log"
Feb 02 18:20:18 crc kubenswrapper[4858]: I0202 18:20:18.075110 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-7kj75_b09a6151-2124-4f22-b226-a1ae36869433/cert-manager-cainjector/0.log"
Feb 02 18:20:18 crc kubenswrapper[4858]: I0202 18:20:18.162004 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-bhlzx_786fe412-07f2-458a-bb89-f77dc747524c/cert-manager-webhook/0.log"
Feb 02 18:20:28 crc kubenswrapper[4858]: I0202 18:20:28.401525 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c"
Feb 02 18:20:28 crc kubenswrapper[4858]: E0202 18:20:28.402283 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e"
Feb 02 18:20:32 crc kubenswrapper[4858]: I0202 18:20:32.823337 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-f8nz8_a69645ff-c03c-4296-aa6a-63cd14095040/nmstate-console-plugin/0.log"
Feb 02 18:20:32 crc kubenswrapper[4858]: I0202 18:20:32.940721 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-9rsf6_35bb0d37-e388-42c3-ad03-2cbb0e4a9409/nmstate-handler/0.log"
Feb 02 18:20:33 crc kubenswrapper[4858]: I0202 18:20:33.068328 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-8fldl_cdee88da-b22d-4fe4-98a2-a53cadedb993/kube-rbac-proxy/0.log"
Feb 02 18:20:33 crc kubenswrapper[4858]: I0202 18:20:33.157107 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-8fldl_cdee88da-b22d-4fe4-98a2-a53cadedb993/nmstate-metrics/0.log"
Feb 02 18:20:33 crc kubenswrapper[4858]: I0202 18:20:33.275142 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-pmkq5_f3603e7c-ff14-4deb-a9d8-e5751a729be6/nmstate-operator/0.log"
Feb 02 18:20:33 crc kubenswrapper[4858]: I0202 18:20:33.411627 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-t9gvb_11353839-2688-4112-a9d9-87bead34c26a/nmstate-webhook/0.log"
18:20:33.411627 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-t9gvb_11353839-2688-4112-a9d9-87bead34c26a/nmstate-webhook/0.log" Feb 02 18:20:42 crc kubenswrapper[4858]: I0202 18:20:42.400693 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" Feb 02 18:20:42 crc kubenswrapper[4858]: E0202 18:20:42.401544 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:20:54 crc kubenswrapper[4858]: I0202 18:20:54.402264 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" Feb 02 18:20:54 crc kubenswrapper[4858]: E0202 18:20:54.402996 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:21:02 crc kubenswrapper[4858]: I0202 18:21:02.967872 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6bvx9_0b58bf4d-52bb-4876-8555-b8b403e0cbcb/kube-rbac-proxy/0.log" Feb 02 18:21:03 crc kubenswrapper[4858]: I0202 18:21:03.061298 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6bvx9_0b58bf4d-52bb-4876-8555-b8b403e0cbcb/controller/0.log" Feb 02 18:21:03 crc kubenswrapper[4858]: I0202 18:21:03.261316 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-frr-files/0.log" Feb 02 18:21:03 crc kubenswrapper[4858]: I0202 18:21:03.417912 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-frr-files/0.log" Feb 02 18:21:03 crc kubenswrapper[4858]: I0202 18:21:03.446183 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-reloader/0.log" Feb 02 18:21:03 crc kubenswrapper[4858]: I0202 18:21:03.463815 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-metrics/0.log" Feb 02 18:21:03 crc kubenswrapper[4858]: I0202 18:21:03.474349 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-reloader/0.log" Feb 02 18:21:03 crc kubenswrapper[4858]: I0202 18:21:03.651924 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-frr-files/0.log" Feb 02 18:21:03 crc kubenswrapper[4858]: I0202 18:21:03.692193 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-reloader/0.log" Feb 02 18:21:03 crc kubenswrapper[4858]: I0202 18:21:03.728178 4858 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-metrics/0.log" Feb 02 18:21:03 crc kubenswrapper[4858]: I0202 18:21:03.749577 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-metrics/0.log" Feb 02 18:21:03 crc kubenswrapper[4858]: I0202 18:21:03.886402 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-reloader/0.log" Feb 02 18:21:03 crc kubenswrapper[4858]: I0202 18:21:03.896205 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-frr-files/0.log" Feb 02 18:21:03 crc kubenswrapper[4858]: I0202 18:21:03.916373 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/cp-metrics/0.log" Feb 02 18:21:03 crc kubenswrapper[4858]: I0202 18:21:03.962546 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/controller/0.log" Feb 02 18:21:04 crc kubenswrapper[4858]: I0202 18:21:04.115461 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/kube-rbac-proxy/0.log" Feb 02 18:21:04 crc kubenswrapper[4858]: I0202 18:21:04.129280 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/frr-metrics/0.log" Feb 02 18:21:04 crc kubenswrapper[4858]: I0202 18:21:04.175331 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/kube-rbac-proxy-frr/0.log" Feb 02 18:21:04 crc kubenswrapper[4858]: I0202 18:21:04.386074 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-pgxzh_1226b394-7ee5-4947-8d99-532106bb7baa/frr-k8s-webhook-server/0.log" Feb 02 18:21:04 crc kubenswrapper[4858]: I0202 18:21:04.389513 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/reloader/0.log" Feb 02 18:21:05 crc kubenswrapper[4858]: I0202 18:21:05.173928 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-749875bd8b-wr4x9_3ba8c286-d0ce-40d1-b759-9d983474210b/manager/0.log" Feb 02 18:21:05 crc kubenswrapper[4858]: I0202 18:21:05.371483 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5854c4649f-zl8j4_5efe8813-bcae-42c6-be1a-6f60809e7e3e/webhook-server/0.log" Feb 02 18:21:05 crc kubenswrapper[4858]: I0202 18:21:05.426833 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dk7fw_c72561ce-1db8-4883-97fe-488222b2f232/kube-rbac-proxy/0.log" Feb 02 18:21:05 crc kubenswrapper[4858]: I0202 18:21:05.578775 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svlq2_aa24ded3-4a92-4c89-bade-68547bdca597/frr/0.log" Feb 02 18:21:05 crc kubenswrapper[4858]: I0202 18:21:05.898174 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-dk7fw_c72561ce-1db8-4883-97fe-488222b2f232/speaker/0.log" Feb 02 18:21:07 crc kubenswrapper[4858]: I0202 18:21:07.401158 4858 scope.go:117] "RemoveContainer" 
containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" Feb 02 18:21:07 crc kubenswrapper[4858]: E0202 18:21:07.401723 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:21:12 crc kubenswrapper[4858]: I0202 18:21:12.952434 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xxjww"] Feb 02 18:21:12 crc kubenswrapper[4858]: E0202 18:21:12.953539 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="180305bc-fe2b-4a1f-ac36-5e9f44d89e8d" containerName="container-00" Feb 02 18:21:12 crc kubenswrapper[4858]: I0202 18:21:12.953554 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="180305bc-fe2b-4a1f-ac36-5e9f44d89e8d" containerName="container-00" Feb 02 18:21:12 crc kubenswrapper[4858]: I0202 18:21:12.953788 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="180305bc-fe2b-4a1f-ac36-5e9f44d89e8d" containerName="container-00" Feb 02 18:21:12 crc kubenswrapper[4858]: I0202 18:21:12.955190 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxjww" Feb 02 18:21:12 crc kubenswrapper[4858]: I0202 18:21:12.980812 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxjww"] Feb 02 18:21:12 crc kubenswrapper[4858]: I0202 18:21:12.992338 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10774e06-c012-4165-aefd-2c9ae6117134-catalog-content\") pod \"redhat-marketplace-xxjww\" (UID: \"10774e06-c012-4165-aefd-2c9ae6117134\") " pod="openshift-marketplace/redhat-marketplace-xxjww" Feb 02 18:21:12 crc kubenswrapper[4858]: I0202 18:21:12.992425 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10774e06-c012-4165-aefd-2c9ae6117134-utilities\") pod \"redhat-marketplace-xxjww\" (UID: \"10774e06-c012-4165-aefd-2c9ae6117134\") " pod="openshift-marketplace/redhat-marketplace-xxjww" Feb 02 18:21:12 crc kubenswrapper[4858]: I0202 18:21:12.992479 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np5cs\" (UniqueName: \"kubernetes.io/projected/10774e06-c012-4165-aefd-2c9ae6117134-kube-api-access-np5cs\") pod \"redhat-marketplace-xxjww\" (UID: \"10774e06-c012-4165-aefd-2c9ae6117134\") " pod="openshift-marketplace/redhat-marketplace-xxjww" Feb 02 18:21:13 crc kubenswrapper[4858]: I0202 18:21:13.094438 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10774e06-c012-4165-aefd-2c9ae6117134-catalog-content\") pod \"redhat-marketplace-xxjww\" (UID: \"10774e06-c012-4165-aefd-2c9ae6117134\") " pod="openshift-marketplace/redhat-marketplace-xxjww" Feb 02 18:21:13 crc kubenswrapper[4858]: I0202 18:21:13.094557 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/10774e06-c012-4165-aefd-2c9ae6117134-utilities\") pod \"redhat-marketplace-xxjww\" (UID: \"10774e06-c012-4165-aefd-2c9ae6117134\") " pod="openshift-marketplace/redhat-marketplace-xxjww" Feb 02 18:21:13 crc kubenswrapper[4858]: I0202 18:21:13.094604 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np5cs\" (UniqueName: \"kubernetes.io/projected/10774e06-c012-4165-aefd-2c9ae6117134-kube-api-access-np5cs\") pod \"redhat-marketplace-xxjww\" (UID: \"10774e06-c012-4165-aefd-2c9ae6117134\") " pod="openshift-marketplace/redhat-marketplace-xxjww" Feb 02 18:21:13 crc kubenswrapper[4858]: I0202 18:21:13.095281 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10774e06-c012-4165-aefd-2c9ae6117134-utilities\") pod \"redhat-marketplace-xxjww\" (UID: \"10774e06-c012-4165-aefd-2c9ae6117134\") " pod="openshift-marketplace/redhat-marketplace-xxjww" Feb 02 18:21:13 crc kubenswrapper[4858]: I0202 18:21:13.097291 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10774e06-c012-4165-aefd-2c9ae6117134-catalog-content\") pod \"redhat-marketplace-xxjww\" (UID: \"10774e06-c012-4165-aefd-2c9ae6117134\") " pod="openshift-marketplace/redhat-marketplace-xxjww" Feb 02 18:21:13 crc kubenswrapper[4858]: I0202 18:21:13.115107 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np5cs\" (UniqueName: \"kubernetes.io/projected/10774e06-c012-4165-aefd-2c9ae6117134-kube-api-access-np5cs\") pod \"redhat-marketplace-xxjww\" (UID: \"10774e06-c012-4165-aefd-2c9ae6117134\") " pod="openshift-marketplace/redhat-marketplace-xxjww" Feb 02 18:21:13 crc kubenswrapper[4858]: I0202 18:21:13.282268 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxjww" Feb 02 18:21:13 crc kubenswrapper[4858]: I0202 18:21:13.767026 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxjww"] Feb 02 18:21:14 crc kubenswrapper[4858]: I0202 18:21:14.484916 4858 generic.go:334] "Generic (PLEG): container finished" podID="10774e06-c012-4165-aefd-2c9ae6117134" containerID="8ddd7bd29e37257a69c9a84494b1bc8efa58659081d5c262d4a84f3904247fb7" exitCode=0 Feb 02 18:21:14 crc kubenswrapper[4858]: I0202 18:21:14.485475 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxjww" event={"ID":"10774e06-c012-4165-aefd-2c9ae6117134","Type":"ContainerDied","Data":"8ddd7bd29e37257a69c9a84494b1bc8efa58659081d5c262d4a84f3904247fb7"} Feb 02 18:21:14 crc kubenswrapper[4858]: I0202 18:21:14.485510 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxjww" event={"ID":"10774e06-c012-4165-aefd-2c9ae6117134","Type":"ContainerStarted","Data":"26f96c98e8d7d2c9ef9fdb7b2ce8bb394d17bb71558bbc2431f8c4102a300d1e"} Feb 02 18:21:14 crc kubenswrapper[4858]: I0202 18:21:14.489137 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 18:21:16 crc kubenswrapper[4858]: I0202 18:21:16.504618 4858 generic.go:334] "Generic (PLEG): container finished" podID="10774e06-c012-4165-aefd-2c9ae6117134" containerID="107f1f8fa5b79116a9355f507ff4e4720eb1ce0a889f2a222a2c8a9df261fd4b" exitCode=0 Feb 02 18:21:16 crc kubenswrapper[4858]: I0202 18:21:16.504744 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxjww" event={"ID":"10774e06-c012-4165-aefd-2c9ae6117134","Type":"ContainerDied","Data":"107f1f8fa5b79116a9355f507ff4e4720eb1ce0a889f2a222a2c8a9df261fd4b"} Feb 02 18:21:17 crc kubenswrapper[4858]: I0202 18:21:17.516460 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxjww" event={"ID":"10774e06-c012-4165-aefd-2c9ae6117134","Type":"ContainerStarted","Data":"15e43e851146dd523ae9f6142b67ea5472c40b2eb603cba0f902dd928f4f7914"} Feb 02 18:21:17 crc kubenswrapper[4858]: I0202 18:21:17.539425 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xxjww" podStartSLOduration=3.111470292 podStartE2EDuration="5.539401578s" podCreationTimestamp="2026-02-02 18:21:12 +0000 UTC" firstStartedPulling="2026-02-02 18:21:14.488861088 +0000 UTC m=+3975.641276363" lastFinishedPulling="2026-02-02 18:21:16.916792384 +0000 UTC m=+3978.069207649" observedRunningTime="2026-02-02 18:21:17.533222151 +0000 UTC m=+3978.685637426" watchObservedRunningTime="2026-02-02 18:21:17.539401578 +0000 UTC m=+3978.691816843" Feb 02 18:21:18 crc kubenswrapper[4858]: I0202 18:21:18.401336 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" Feb 02 18:21:18 crc kubenswrapper[4858]: E0202 18:21:18.402147 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:21:19 crc 
kubenswrapper[4858]: I0202 18:21:19.418158 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m_10d6ebd8-c224-43f2-b27c-bb5944ad819d/util/0.log" Feb 02 18:21:19 crc kubenswrapper[4858]: I0202 18:21:19.647518 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m_10d6ebd8-c224-43f2-b27c-bb5944ad819d/pull/0.log" Feb 02 18:21:19 crc kubenswrapper[4858]: I0202 18:21:19.648522 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m_10d6ebd8-c224-43f2-b27c-bb5944ad819d/pull/0.log" Feb 02 18:21:19 crc kubenswrapper[4858]: I0202 18:21:19.657379 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m_10d6ebd8-c224-43f2-b27c-bb5944ad819d/util/0.log" Feb 02 18:21:19 crc kubenswrapper[4858]: I0202 18:21:19.918010 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m_10d6ebd8-c224-43f2-b27c-bb5944ad819d/util/0.log" Feb 02 18:21:19 crc kubenswrapper[4858]: I0202 18:21:19.974697 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m_10d6ebd8-c224-43f2-b27c-bb5944ad819d/extract/0.log" Feb 02 18:21:19 crc kubenswrapper[4858]: I0202 18:21:19.987220 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmhk8m_10d6ebd8-c224-43f2-b27c-bb5944ad819d/pull/0.log" Feb 02 18:21:20 crc kubenswrapper[4858]: I0202 18:21:20.136487 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27_9c43bf8c-3e4f-4983-a524-7033f240b2f7/util/0.log" Feb 02 18:21:20 crc kubenswrapper[4858]: I0202 18:21:20.305539 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27_9c43bf8c-3e4f-4983-a524-7033f240b2f7/pull/0.log" Feb 02 18:21:20 crc kubenswrapper[4858]: I0202 18:21:20.314953 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27_9c43bf8c-3e4f-4983-a524-7033f240b2f7/pull/0.log" Feb 02 18:21:20 crc kubenswrapper[4858]: I0202 18:21:20.366329 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27_9c43bf8c-3e4f-4983-a524-7033f240b2f7/util/0.log" Feb 02 18:21:20 crc kubenswrapper[4858]: I0202 18:21:20.549689 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27_9c43bf8c-3e4f-4983-a524-7033f240b2f7/extract/0.log" Feb 02 18:21:20 crc kubenswrapper[4858]: I0202 18:21:20.556982 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27_9c43bf8c-3e4f-4983-a524-7033f240b2f7/util/0.log" Feb 02 18:21:20 crc kubenswrapper[4858]: I0202 18:21:20.584933 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713kmq27_9c43bf8c-3e4f-4983-a524-7033f240b2f7/pull/0.log" Feb 02 18:21:20 crc kubenswrapper[4858]: I0202 18:21:20.721884 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q6b5c_05b84894-e183-4874-8ca5-002436026fce/extract-utilities/0.log" Feb 02 18:21:20 crc kubenswrapper[4858]: I0202 18:21:20.891283 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q6b5c_05b84894-e183-4874-8ca5-002436026fce/extract-content/0.log" Feb 02 18:21:20 crc kubenswrapper[4858]: I0202 18:21:20.898093 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q6b5c_05b84894-e183-4874-8ca5-002436026fce/extract-content/0.log" Feb 02 18:21:20 crc kubenswrapper[4858]: I0202 18:21:20.902851 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q6b5c_05b84894-e183-4874-8ca5-002436026fce/extract-utilities/0.log" Feb 02 18:21:21 crc kubenswrapper[4858]: I0202 18:21:21.090123 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q6b5c_05b84894-e183-4874-8ca5-002436026fce/extract-utilities/0.log" Feb 02 18:21:21 crc kubenswrapper[4858]: I0202 18:21:21.137923 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q6b5c_05b84894-e183-4874-8ca5-002436026fce/extract-content/0.log" Feb 02 18:21:21 crc kubenswrapper[4858]: I0202 18:21:21.355396 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-phnlb_c1397f76-ca47-41cd-860f-4ecb3e5856fb/extract-utilities/0.log" Feb 02 18:21:21 crc kubenswrapper[4858]: I0202 18:21:21.528913 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-phnlb_c1397f76-ca47-41cd-860f-4ecb3e5856fb/extract-utilities/0.log" Feb 02 18:21:21 crc kubenswrapper[4858]: I0202 18:21:21.592737 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-phnlb_c1397f76-ca47-41cd-860f-4ecb3e5856fb/extract-content/0.log" Feb 02 18:21:21 crc kubenswrapper[4858]: I0202 18:21:21.633854 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q6b5c_05b84894-e183-4874-8ca5-002436026fce/registry-server/0.log" Feb 02 18:21:21 crc kubenswrapper[4858]: I0202 18:21:21.637410 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-phnlb_c1397f76-ca47-41cd-860f-4ecb3e5856fb/extract-content/0.log" Feb 02 18:21:21 crc kubenswrapper[4858]: I0202 18:21:21.767212 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-phnlb_c1397f76-ca47-41cd-860f-4ecb3e5856fb/extract-utilities/0.log" Feb 02 18:21:21 crc kubenswrapper[4858]: I0202 18:21:21.799201 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-phnlb_c1397f76-ca47-41cd-860f-4ecb3e5856fb/extract-content/0.log" Feb 02 18:21:22 crc kubenswrapper[4858]: I0202 18:21:22.049492 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-g7r5b_49416635-c370-4a58-aa72-0c1d52fab5f3/marketplace-operator/0.log" Feb 02 18:21:22 crc kubenswrapper[4858]: I0202 18:21:22.191015 4858 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c22x2_852607b1-d3cd-4688-a469-872ae6c5e98d/extract-utilities/0.log" Feb 02 18:21:22 crc kubenswrapper[4858]: I0202 18:21:22.382463 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c22x2_852607b1-d3cd-4688-a469-872ae6c5e98d/extract-utilities/0.log" Feb 02 18:21:22 crc kubenswrapper[4858]: I0202 18:21:22.414637 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c22x2_852607b1-d3cd-4688-a469-872ae6c5e98d/extract-content/0.log" Feb 02 18:21:22 crc kubenswrapper[4858]: I0202 18:21:22.448530 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c22x2_852607b1-d3cd-4688-a469-872ae6c5e98d/extract-content/0.log" Feb 02 18:21:22 crc kubenswrapper[4858]: I0202 18:21:22.473611 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-phnlb_c1397f76-ca47-41cd-860f-4ecb3e5856fb/registry-server/0.log" Feb 02 18:21:22 crc kubenswrapper[4858]: I0202 18:21:22.635785 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c22x2_852607b1-d3cd-4688-a469-872ae6c5e98d/extract-content/0.log" Feb 02 18:21:22 crc kubenswrapper[4858]: I0202 18:21:22.686143 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c22x2_852607b1-d3cd-4688-a469-872ae6c5e98d/extract-utilities/0.log" Feb 02 18:21:22 crc kubenswrapper[4858]: I0202 18:21:22.802148 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c22x2_852607b1-d3cd-4688-a469-872ae6c5e98d/registry-server/0.log" Feb 02 18:21:22 crc kubenswrapper[4858]: I0202 18:21:22.856307 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xxjww_10774e06-c012-4165-aefd-2c9ae6117134/extract-utilities/0.log" Feb 02 18:21:23 crc kubenswrapper[4858]: I0202 18:21:23.042380 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xxjww_10774e06-c012-4165-aefd-2c9ae6117134/extract-content/0.log" Feb 02 18:21:23 crc kubenswrapper[4858]: I0202 18:21:23.085844 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xxjww_10774e06-c012-4165-aefd-2c9ae6117134/extract-utilities/0.log" Feb 02 18:21:23 crc kubenswrapper[4858]: I0202 18:21:23.093487 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xxjww_10774e06-c012-4165-aefd-2c9ae6117134/extract-content/0.log" Feb 02 18:21:23 crc kubenswrapper[4858]: I0202 18:21:23.270074 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xxjww_10774e06-c012-4165-aefd-2c9ae6117134/extract-content/0.log" Feb 02 18:21:23 crc kubenswrapper[4858]: I0202 18:21:23.280159 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xxjww_10774e06-c012-4165-aefd-2c9ae6117134/extract-utilities/0.log" Feb 02 18:21:23 crc kubenswrapper[4858]: I0202 18:21:23.283231 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xxjww_10774e06-c012-4165-aefd-2c9ae6117134/registry-server/0.log" Feb 02 18:21:23 crc kubenswrapper[4858]: I0202 18:21:23.282582 4858 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xxjww" Feb 02 18:21:23 crc kubenswrapper[4858]: I0202 18:21:23.284059 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xxjww" Feb 02 18:21:23 crc kubenswrapper[4858]: I0202 18:21:23.340310 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xxjww" Feb 02 18:21:23 crc kubenswrapper[4858]: I0202 18:21:23.456669 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-llwt8_454bd674-ee00-420b-9910-16fe062ea116/extract-utilities/0.log" Feb 02 18:21:23 crc kubenswrapper[4858]: I0202 18:21:23.627356 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xxjww" Feb 02 18:21:23 crc kubenswrapper[4858]: I0202 18:21:23.680138 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxjww"] Feb 02 18:21:23 crc kubenswrapper[4858]: I0202 18:21:23.685180 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-llwt8_454bd674-ee00-420b-9910-16fe062ea116/extract-content/0.log" Feb 02 18:21:23 crc kubenswrapper[4858]: I0202 18:21:23.702649 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-llwt8_454bd674-ee00-420b-9910-16fe062ea116/extract-utilities/0.log" Feb 02 18:21:23 crc kubenswrapper[4858]: I0202 18:21:23.702863 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-llwt8_454bd674-ee00-420b-9910-16fe062ea116/extract-content/0.log" Feb 02 18:21:23 crc kubenswrapper[4858]: I0202 18:21:23.902621 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-llwt8_454bd674-ee00-420b-9910-16fe062ea116/extract-content/0.log" Feb 02 18:21:23 crc kubenswrapper[4858]: I0202 18:21:23.946816 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-llwt8_454bd674-ee00-420b-9910-16fe062ea116/extract-utilities/0.log" Feb 02 18:21:24 crc kubenswrapper[4858]: I0202 18:21:24.384729 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-llwt8_454bd674-ee00-420b-9910-16fe062ea116/registry-server/0.log" Feb 02 18:21:25 crc kubenswrapper[4858]: I0202 18:21:25.587366 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xxjww" podUID="10774e06-c012-4165-aefd-2c9ae6117134" containerName="registry-server" containerID="cri-o://15e43e851146dd523ae9f6142b67ea5472c40b2eb603cba0f902dd928f4f7914" gracePeriod=2 Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.165808 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxjww" Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.261844 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10774e06-c012-4165-aefd-2c9ae6117134-utilities\") pod \"10774e06-c012-4165-aefd-2c9ae6117134\" (UID: \"10774e06-c012-4165-aefd-2c9ae6117134\") " Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.262288 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10774e06-c012-4165-aefd-2c9ae6117134-catalog-content\") pod \"10774e06-c012-4165-aefd-2c9ae6117134\" (UID: \"10774e06-c012-4165-aefd-2c9ae6117134\") " Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.262344 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np5cs\" (UniqueName: \"kubernetes.io/projected/10774e06-c012-4165-aefd-2c9ae6117134-kube-api-access-np5cs\") pod \"10774e06-c012-4165-aefd-2c9ae6117134\" (UID: \"10774e06-c012-4165-aefd-2c9ae6117134\") " Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.263094 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10774e06-c012-4165-aefd-2c9ae6117134-utilities" (OuterVolumeSpecName: "utilities") pod "10774e06-c012-4165-aefd-2c9ae6117134" (UID: "10774e06-c012-4165-aefd-2c9ae6117134"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.264058 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10774e06-c012-4165-aefd-2c9ae6117134-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.269859 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10774e06-c012-4165-aefd-2c9ae6117134-kube-api-access-np5cs" (OuterVolumeSpecName: "kube-api-access-np5cs") pod "10774e06-c012-4165-aefd-2c9ae6117134" (UID: "10774e06-c012-4165-aefd-2c9ae6117134"). InnerVolumeSpecName "kube-api-access-np5cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.287893 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10774e06-c012-4165-aefd-2c9ae6117134-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10774e06-c012-4165-aefd-2c9ae6117134" (UID: "10774e06-c012-4165-aefd-2c9ae6117134"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.365653 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10774e06-c012-4165-aefd-2c9ae6117134-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.365706 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np5cs\" (UniqueName: \"kubernetes.io/projected/10774e06-c012-4165-aefd-2c9ae6117134-kube-api-access-np5cs\") on node \"crc\" DevicePath \"\"" Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.603327 4858 generic.go:334] "Generic (PLEG): container finished" podID="10774e06-c012-4165-aefd-2c9ae6117134" containerID="15e43e851146dd523ae9f6142b67ea5472c40b2eb603cba0f902dd928f4f7914" exitCode=0 Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.603384 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxjww" event={"ID":"10774e06-c012-4165-aefd-2c9ae6117134","Type":"ContainerDied","Data":"15e43e851146dd523ae9f6142b67ea5472c40b2eb603cba0f902dd928f4f7914"} Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.603428 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxjww" event={"ID":"10774e06-c012-4165-aefd-2c9ae6117134","Type":"ContainerDied","Data":"26f96c98e8d7d2c9ef9fdb7b2ce8bb394d17bb71558bbc2431f8c4102a300d1e"} Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.603454 4858 scope.go:117] "RemoveContainer" containerID="15e43e851146dd523ae9f6142b67ea5472c40b2eb603cba0f902dd928f4f7914" Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.603455 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxjww" Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.630390 4858 scope.go:117] "RemoveContainer" containerID="107f1f8fa5b79116a9355f507ff4e4720eb1ce0a889f2a222a2c8a9df261fd4b" Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.635301 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxjww"] Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.643596 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxjww"] Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.649598 4858 scope.go:117] "RemoveContainer" containerID="8ddd7bd29e37257a69c9a84494b1bc8efa58659081d5c262d4a84f3904247fb7" Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.697817 4858 scope.go:117] "RemoveContainer" containerID="15e43e851146dd523ae9f6142b67ea5472c40b2eb603cba0f902dd928f4f7914" Feb 02 18:21:26 crc kubenswrapper[4858]: E0202 18:21:26.698313 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e43e851146dd523ae9f6142b67ea5472c40b2eb603cba0f902dd928f4f7914\": container with ID starting with 15e43e851146dd523ae9f6142b67ea5472c40b2eb603cba0f902dd928f4f7914 not found: ID does not exist" containerID="15e43e851146dd523ae9f6142b67ea5472c40b2eb603cba0f902dd928f4f7914" Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.698365 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e43e851146dd523ae9f6142b67ea5472c40b2eb603cba0f902dd928f4f7914"} err="failed to get container status \"15e43e851146dd523ae9f6142b67ea5472c40b2eb603cba0f902dd928f4f7914\": rpc error: code = NotFound desc = could not find container \"15e43e851146dd523ae9f6142b67ea5472c40b2eb603cba0f902dd928f4f7914\": container with ID starting with 15e43e851146dd523ae9f6142b67ea5472c40b2eb603cba0f902dd928f4f7914 not found: ID does not exist" Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.698399 4858 scope.go:117] "RemoveContainer" containerID="107f1f8fa5b79116a9355f507ff4e4720eb1ce0a889f2a222a2c8a9df261fd4b" Feb 02 18:21:26 crc kubenswrapper[4858]: E0202 18:21:26.698942 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"107f1f8fa5b79116a9355f507ff4e4720eb1ce0a889f2a222a2c8a9df261fd4b\": container with ID starting with 107f1f8fa5b79116a9355f507ff4e4720eb1ce0a889f2a222a2c8a9df261fd4b not found: ID does not exist" containerID="107f1f8fa5b79116a9355f507ff4e4720eb1ce0a889f2a222a2c8a9df261fd4b" Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.699026 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"107f1f8fa5b79116a9355f507ff4e4720eb1ce0a889f2a222a2c8a9df261fd4b"} err="failed to get container status \"107f1f8fa5b79116a9355f507ff4e4720eb1ce0a889f2a222a2c8a9df261fd4b\": rpc error: code = NotFound desc = could not find container \"107f1f8fa5b79116a9355f507ff4e4720eb1ce0a889f2a222a2c8a9df261fd4b\": container with ID starting with 107f1f8fa5b79116a9355f507ff4e4720eb1ce0a889f2a222a2c8a9df261fd4b not found: ID does not exist" Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.699060 4858 scope.go:117] "RemoveContainer" containerID="8ddd7bd29e37257a69c9a84494b1bc8efa58659081d5c262d4a84f3904247fb7" Feb 02 18:21:26 crc kubenswrapper[4858]: E0202 18:21:26.699428 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8ddd7bd29e37257a69c9a84494b1bc8efa58659081d5c262d4a84f3904247fb7\": container with ID starting with 8ddd7bd29e37257a69c9a84494b1bc8efa58659081d5c262d4a84f3904247fb7 not found: ID does not exist" containerID="8ddd7bd29e37257a69c9a84494b1bc8efa58659081d5c262d4a84f3904247fb7" Feb 02 18:21:26 crc kubenswrapper[4858]: I0202 18:21:26.699470 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ddd7bd29e37257a69c9a84494b1bc8efa58659081d5c262d4a84f3904247fb7"} err="failed to get container status \"8ddd7bd29e37257a69c9a84494b1bc8efa58659081d5c262d4a84f3904247fb7\": rpc error: code = NotFound desc = could not find container \"8ddd7bd29e37257a69c9a84494b1bc8efa58659081d5c262d4a84f3904247fb7\": container with ID starting with 8ddd7bd29e37257a69c9a84494b1bc8efa58659081d5c262d4a84f3904247fb7 not found: ID does not exist" Feb 02 18:21:28 crc kubenswrapper[4858]: I0202 18:21:28.423711 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10774e06-c012-4165-aefd-2c9ae6117134" path="/var/lib/kubelet/pods/10774e06-c012-4165-aefd-2c9ae6117134/volumes" Feb 02 18:21:32 crc kubenswrapper[4858]: I0202 18:21:32.406157 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" Feb 02 18:21:32 crc kubenswrapper[4858]: E0202 18:21:32.406855 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:21:44 crc kubenswrapper[4858]: I0202 18:21:44.401185 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" Feb 02 18:21:44 crc kubenswrapper[4858]: E0202 18:21:44.401996 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:21:56 crc kubenswrapper[4858]: I0202 18:21:56.401517 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" Feb 02 18:21:56 crc kubenswrapper[4858]: E0202 18:21:56.402408 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:22:10 crc kubenswrapper[4858]: I0202 18:22:10.439204 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" Feb 02 18:22:10 crc kubenswrapper[4858]: E0202 18:22:10.452838 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:22:15 crc kubenswrapper[4858]: I0202 18:22:15.283184 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-7748685595-fdxjj" podUID="6cdd18b7-595d-4635-9a17-32be92896da1" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 02 18:22:21 crc kubenswrapper[4858]: I0202 18:22:21.404073 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" Feb 02 18:22:21 crc kubenswrapper[4858]: E0202 18:22:21.404883 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:22:32 crc kubenswrapper[4858]: I0202 18:22:32.401394 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" Feb 02 18:22:32 crc kubenswrapper[4858]: E0202 18:22:32.402276 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:22:47 crc kubenswrapper[4858]: I0202 18:22:47.401078 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" Feb 02 18:22:47 crc kubenswrapper[4858]: E0202 18:22:47.401811 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:22:58 crc kubenswrapper[4858]: I0202 18:22:58.401534 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" Feb 02 18:22:58 crc kubenswrapper[4858]: E0202 18:22:58.402792 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:23:13 crc kubenswrapper[4858]: I0202 18:23:13.400787 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" Feb 02 18:23:13 crc kubenswrapper[4858]: 
E0202 18:23:13.402727 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:23:23 crc kubenswrapper[4858]: I0202 18:23:23.892995 4858 generic.go:334] "Generic (PLEG): container finished" podID="f2eb319c-5edd-4a70-a7d6-4c295048bbed" containerID="32e41c2432e845bea4b6fd4d8eb19aba1f33810192130dfb1bb3e4090df5326b" exitCode=0 Feb 02 18:23:23 crc kubenswrapper[4858]: I0202 18:23:23.893057 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4ln74/must-gather-rtvv9" event={"ID":"f2eb319c-5edd-4a70-a7d6-4c295048bbed","Type":"ContainerDied","Data":"32e41c2432e845bea4b6fd4d8eb19aba1f33810192130dfb1bb3e4090df5326b"} Feb 02 18:23:23 crc kubenswrapper[4858]: I0202 18:23:23.894223 4858 scope.go:117] "RemoveContainer" containerID="32e41c2432e845bea4b6fd4d8eb19aba1f33810192130dfb1bb3e4090df5326b" Feb 02 18:23:24 crc kubenswrapper[4858]: I0202 18:23:24.140753 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4ln74_must-gather-rtvv9_f2eb319c-5edd-4a70-a7d6-4c295048bbed/gather/0.log" Feb 02 18:23:28 crc kubenswrapper[4858]: I0202 18:23:28.402218 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" Feb 02 18:23:28 crc kubenswrapper[4858]: E0202 18:23:28.402918 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:23:36 crc kubenswrapper[4858]: I0202 18:23:36.140360 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4ln74/must-gather-rtvv9"] Feb 02 18:23:36 crc kubenswrapper[4858]: I0202 18:23:36.142101 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-4ln74/must-gather-rtvv9" podUID="f2eb319c-5edd-4a70-a7d6-4c295048bbed" containerName="copy" containerID="cri-o://154bb6b1fde0089f229ab6e3319ef34257ae9b87b23537bd473ba60ce2c3f71b" gracePeriod=2 Feb 02 18:23:36 crc kubenswrapper[4858]: I0202 18:23:36.157329 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4ln74/must-gather-rtvv9"] Feb 02 18:23:36 crc kubenswrapper[4858]: I0202 18:23:36.629966 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4ln74_must-gather-rtvv9_f2eb319c-5edd-4a70-a7d6-4c295048bbed/copy/0.log" Feb 02 18:23:36 crc kubenswrapper[4858]: I0202 18:23:36.630782 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4ln74/must-gather-rtvv9" Feb 02 18:23:36 crc kubenswrapper[4858]: I0202 18:23:36.800698 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l56n\" (UniqueName: \"kubernetes.io/projected/f2eb319c-5edd-4a70-a7d6-4c295048bbed-kube-api-access-9l56n\") pod \"f2eb319c-5edd-4a70-a7d6-4c295048bbed\" (UID: \"f2eb319c-5edd-4a70-a7d6-4c295048bbed\") " Feb 02 18:23:36 crc kubenswrapper[4858]: I0202 18:23:36.800758 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f2eb319c-5edd-4a70-a7d6-4c295048bbed-must-gather-output\") pod \"f2eb319c-5edd-4a70-a7d6-4c295048bbed\" (UID: \"f2eb319c-5edd-4a70-a7d6-4c295048bbed\") " Feb 02 18:23:36 crc kubenswrapper[4858]: I0202 18:23:36.957663 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2eb319c-5edd-4a70-a7d6-4c295048bbed-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f2eb319c-5edd-4a70-a7d6-4c295048bbed" (UID: "f2eb319c-5edd-4a70-a7d6-4c295048bbed"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:23:37 crc kubenswrapper[4858]: I0202 18:23:37.005546 4858 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f2eb319c-5edd-4a70-a7d6-4c295048bbed-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 02 18:23:37 crc kubenswrapper[4858]: I0202 18:23:37.021873 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4ln74_must-gather-rtvv9_f2eb319c-5edd-4a70-a7d6-4c295048bbed/copy/0.log" Feb 02 18:23:37 crc kubenswrapper[4858]: I0202 18:23:37.022527 4858 generic.go:334] "Generic (PLEG): container finished" podID="f2eb319c-5edd-4a70-a7d6-4c295048bbed" containerID="154bb6b1fde0089f229ab6e3319ef34257ae9b87b23537bd473ba60ce2c3f71b" exitCode=143 Feb 02 18:23:37 crc kubenswrapper[4858]: I0202 18:23:37.022613 4858 scope.go:117] "RemoveContainer" containerID="154bb6b1fde0089f229ab6e3319ef34257ae9b87b23537bd473ba60ce2c3f71b" Feb 02 18:23:37 crc kubenswrapper[4858]: I0202 18:23:37.022652 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4ln74/must-gather-rtvv9" Feb 02 18:23:37 crc kubenswrapper[4858]: I0202 18:23:37.047728 4858 scope.go:117] "RemoveContainer" containerID="32e41c2432e845bea4b6fd4d8eb19aba1f33810192130dfb1bb3e4090df5326b" Feb 02 18:23:37 crc kubenswrapper[4858]: I0202 18:23:37.204867 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2eb319c-5edd-4a70-a7d6-4c295048bbed-kube-api-access-9l56n" (OuterVolumeSpecName: "kube-api-access-9l56n") pod "f2eb319c-5edd-4a70-a7d6-4c295048bbed" (UID: "f2eb319c-5edd-4a70-a7d6-4c295048bbed"). InnerVolumeSpecName "kube-api-access-9l56n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:23:37 crc kubenswrapper[4858]: I0202 18:23:37.210386 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l56n\" (UniqueName: \"kubernetes.io/projected/f2eb319c-5edd-4a70-a7d6-4c295048bbed-kube-api-access-9l56n\") on node \"crc\" DevicePath \"\"" Feb 02 18:23:37 crc kubenswrapper[4858]: I0202 18:23:37.295733 4858 scope.go:117] "RemoveContainer" containerID="154bb6b1fde0089f229ab6e3319ef34257ae9b87b23537bd473ba60ce2c3f71b" Feb 02 18:23:37 crc kubenswrapper[4858]: E0202 18:23:37.298543 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"154bb6b1fde0089f229ab6e3319ef34257ae9b87b23537bd473ba60ce2c3f71b\": container with ID starting with 154bb6b1fde0089f229ab6e3319ef34257ae9b87b23537bd473ba60ce2c3f71b not found: ID does not exist" containerID="154bb6b1fde0089f229ab6e3319ef34257ae9b87b23537bd473ba60ce2c3f71b" Feb 02 18:23:37 crc kubenswrapper[4858]: I0202 18:23:37.298585 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"154bb6b1fde0089f229ab6e3319ef34257ae9b87b23537bd473ba60ce2c3f71b"} err="failed to get container status \"154bb6b1fde0089f229ab6e3319ef34257ae9b87b23537bd473ba60ce2c3f71b\": rpc error: code = NotFound desc = could not find container \"154bb6b1fde0089f229ab6e3319ef34257ae9b87b23537bd473ba60ce2c3f71b\": container with ID starting with 154bb6b1fde0089f229ab6e3319ef34257ae9b87b23537bd473ba60ce2c3f71b not found: ID does not exist" Feb 02 18:23:37 crc kubenswrapper[4858]: I0202 18:23:37.298606 4858 scope.go:117] "RemoveContainer" containerID="32e41c2432e845bea4b6fd4d8eb19aba1f33810192130dfb1bb3e4090df5326b" Feb 02 18:23:37 crc kubenswrapper[4858]: E0202 18:23:37.298990 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32e41c2432e845bea4b6fd4d8eb19aba1f33810192130dfb1bb3e4090df5326b\": container with ID starting with 32e41c2432e845bea4b6fd4d8eb19aba1f33810192130dfb1bb3e4090df5326b not found: ID does not exist" containerID="32e41c2432e845bea4b6fd4d8eb19aba1f33810192130dfb1bb3e4090df5326b" Feb 02 18:23:37 crc kubenswrapper[4858]: I0202 18:23:37.299023 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32e41c2432e845bea4b6fd4d8eb19aba1f33810192130dfb1bb3e4090df5326b"} err="failed to get container status \"32e41c2432e845bea4b6fd4d8eb19aba1f33810192130dfb1bb3e4090df5326b\": rpc error: code = NotFound desc = could not find container \"32e41c2432e845bea4b6fd4d8eb19aba1f33810192130dfb1bb3e4090df5326b\": container with ID starting with 32e41c2432e845bea4b6fd4d8eb19aba1f33810192130dfb1bb3e4090df5326b not found: ID does not exist" Feb 02 18:23:38 crc kubenswrapper[4858]: I0202 18:23:38.414620 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2eb319c-5edd-4a70-a7d6-4c295048bbed" path="/var/lib/kubelet/pods/f2eb319c-5edd-4a70-a7d6-4c295048bbed/volumes" Feb 02 18:23:43 crc kubenswrapper[4858]: I0202 18:23:43.401654 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" Feb 02 18:23:43 crc kubenswrapper[4858]: E0202 18:23:43.402728 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lbvl2_openshift-machine-config-operator(d03a4872-ca6a-4233-bdbf-b31f7890dc3e)\"" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" podUID="d03a4872-ca6a-4233-bdbf-b31f7890dc3e" Feb 02 18:23:58 crc kubenswrapper[4858]: I0202 18:23:58.404526 4858 scope.go:117] "RemoveContainer" containerID="2dceb4d9313d305ccb6d7b2e00d45b1d22fccca33728887d2fdd371ccc924f5c" Feb 02 18:23:59 crc kubenswrapper[4858]: I0202 18:23:59.227344 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lbvl2" event={"ID":"d03a4872-ca6a-4233-bdbf-b31f7890dc3e","Type":"ContainerStarted","Data":"7a517e2678d17d09fcc37586b4c868ef231a614e555e3888b9941ae8eaed6407"} Feb 02 18:24:12 crc kubenswrapper[4858]: I0202 18:24:12.364529 4858 scope.go:117] "RemoveContainer" containerID="c36bcc2e87ba5ebeb0b93c62f27571adea2297acc3d96b66a0145c0d7d5b37ef" Feb 02 18:24:24 crc kubenswrapper[4858]: I0202 18:24:24.599375 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k8tb8"] Feb 02 18:24:24 crc kubenswrapper[4858]: E0202 18:24:24.600583 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10774e06-c012-4165-aefd-2c9ae6117134" containerName="extract-utilities" Feb 02 18:24:24 crc kubenswrapper[4858]: I0202 18:24:24.600651 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="10774e06-c012-4165-aefd-2c9ae6117134" containerName="extract-utilities" Feb 02 18:24:24 crc kubenswrapper[4858]: E0202 18:24:24.600663 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10774e06-c012-4165-aefd-2c9ae6117134" containerName="registry-server" Feb 02 18:24:24 crc kubenswrapper[4858]: I0202 18:24:24.600671 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="10774e06-c012-4165-aefd-2c9ae6117134" containerName="registry-server" Feb 02 18:24:24 crc kubenswrapper[4858]: E0202 18:24:24.600717 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10774e06-c012-4165-aefd-2c9ae6117134" containerName="extract-content" Feb 02 18:24:24 crc kubenswrapper[4858]: I0202 18:24:24.600725 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="10774e06-c012-4165-aefd-2c9ae6117134" containerName="extract-content" Feb 02 18:24:24 crc kubenswrapper[4858]: E0202 18:24:24.600773 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2eb319c-5edd-4a70-a7d6-4c295048bbed" containerName="gather" Feb 02 18:24:24 crc kubenswrapper[4858]: I0202 18:24:24.600781 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2eb319c-5edd-4a70-a7d6-4c295048bbed" containerName="gather" Feb 02 18:24:24 crc kubenswrapper[4858]: E0202 18:24:24.600820 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2eb319c-5edd-4a70-a7d6-4c295048bbed" containerName="copy" Feb 02 18:24:24 crc kubenswrapper[4858]: I0202 18:24:24.600827 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2eb319c-5edd-4a70-a7d6-4c295048bbed" containerName="copy" Feb 02 18:24:24 crc kubenswrapper[4858]: I0202 18:24:24.601456 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2eb319c-5edd-4a70-a7d6-4c295048bbed" containerName="copy" Feb 02 18:24:24 crc kubenswrapper[4858]: I0202 18:24:24.601494 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2eb319c-5edd-4a70-a7d6-4c295048bbed" containerName="gather" Feb 02 18:24:24 crc kubenswrapper[4858]: I0202 18:24:24.601503 4858 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="10774e06-c012-4165-aefd-2c9ae6117134" containerName="registry-server" Feb 02 18:24:24 crc kubenswrapper[4858]: I0202 18:24:24.604263 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k8tb8" Feb 02 18:24:24 crc kubenswrapper[4858]: I0202 18:24:24.615010 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k8tb8"] Feb 02 18:24:24 crc kubenswrapper[4858]: I0202 18:24:24.712391 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2447x\" (UniqueName: \"kubernetes.io/projected/422a9ad9-998c-429a-adb8-5914ca73b9a1-kube-api-access-2447x\") pod \"certified-operators-k8tb8\" (UID: \"422a9ad9-998c-429a-adb8-5914ca73b9a1\") " pod="openshift-marketplace/certified-operators-k8tb8" Feb 02 18:24:24 crc kubenswrapper[4858]: I0202 18:24:24.712439 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/422a9ad9-998c-429a-adb8-5914ca73b9a1-catalog-content\") pod \"certified-operators-k8tb8\" (UID: \"422a9ad9-998c-429a-adb8-5914ca73b9a1\") " pod="openshift-marketplace/certified-operators-k8tb8" Feb 02 18:24:24 crc kubenswrapper[4858]: I0202 18:24:24.712530 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/422a9ad9-998c-429a-adb8-5914ca73b9a1-utilities\") pod \"certified-operators-k8tb8\" (UID: \"422a9ad9-998c-429a-adb8-5914ca73b9a1\") " pod="openshift-marketplace/certified-operators-k8tb8" Feb 02 18:24:24 crc kubenswrapper[4858]: I0202 18:24:24.814235 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2447x\" (UniqueName: \"kubernetes.io/projected/422a9ad9-998c-429a-adb8-5914ca73b9a1-kube-api-access-2447x\") pod \"certified-operators-k8tb8\" (UID: \"422a9ad9-998c-429a-adb8-5914ca73b9a1\") " pod="openshift-marketplace/certified-operators-k8tb8" Feb 02 18:24:24 crc kubenswrapper[4858]: I0202 18:24:24.814294 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/422a9ad9-998c-429a-adb8-5914ca73b9a1-catalog-content\") pod \"certified-operators-k8tb8\" (UID: \"422a9ad9-998c-429a-adb8-5914ca73b9a1\") " pod="openshift-marketplace/certified-operators-k8tb8" Feb 02 18:24:24 crc kubenswrapper[4858]: I0202 18:24:24.814384 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/422a9ad9-998c-429a-adb8-5914ca73b9a1-utilities\") pod \"certified-operators-k8tb8\" (UID: \"422a9ad9-998c-429a-adb8-5914ca73b9a1\") " pod="openshift-marketplace/certified-operators-k8tb8" Feb 02 18:24:24 crc kubenswrapper[4858]: I0202 18:24:24.814875 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/422a9ad9-998c-429a-adb8-5914ca73b9a1-utilities\") pod \"certified-operators-k8tb8\" (UID: \"422a9ad9-998c-429a-adb8-5914ca73b9a1\") " pod="openshift-marketplace/certified-operators-k8tb8" Feb 02 18:24:24 crc kubenswrapper[4858]: I0202 18:24:24.814880 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/422a9ad9-998c-429a-adb8-5914ca73b9a1-catalog-content\") pod \"certified-operators-k8tb8\" (UID: 
\"422a9ad9-998c-429a-adb8-5914ca73b9a1\") " pod="openshift-marketplace/certified-operators-k8tb8" Feb 02 18:24:24 crc kubenswrapper[4858]: I0202 18:24:24.838509 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2447x\" (UniqueName: \"kubernetes.io/projected/422a9ad9-998c-429a-adb8-5914ca73b9a1-kube-api-access-2447x\") pod \"certified-operators-k8tb8\" (UID: \"422a9ad9-998c-429a-adb8-5914ca73b9a1\") " pod="openshift-marketplace/certified-operators-k8tb8" Feb 02 18:24:24 crc kubenswrapper[4858]: I0202 18:24:24.938615 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k8tb8" Feb 02 18:24:25 crc kubenswrapper[4858]: I0202 18:24:25.441222 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k8tb8"] Feb 02 18:24:25 crc kubenswrapper[4858]: I0202 18:24:25.500203 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8tb8" event={"ID":"422a9ad9-998c-429a-adb8-5914ca73b9a1","Type":"ContainerStarted","Data":"f7408224df5eee33a7fbc44413d64202a8a3eab7c5b2aa7931205fcf83066f85"} Feb 02 18:24:26 crc kubenswrapper[4858]: I0202 18:24:26.510054 4858 generic.go:334] "Generic (PLEG): container finished" podID="422a9ad9-998c-429a-adb8-5914ca73b9a1" containerID="06eed9eb1058da5381f8ba82e6c2dfd71594a1bb1d9aa49fa65b3bf770501547" exitCode=0 Feb 02 18:24:26 crc kubenswrapper[4858]: I0202 18:24:26.510309 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8tb8" event={"ID":"422a9ad9-998c-429a-adb8-5914ca73b9a1","Type":"ContainerDied","Data":"06eed9eb1058da5381f8ba82e6c2dfd71594a1bb1d9aa49fa65b3bf770501547"} Feb 02 18:24:27 crc kubenswrapper[4858]: I0202 18:24:27.519251 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8tb8" event={"ID":"422a9ad9-998c-429a-adb8-5914ca73b9a1","Type":"ContainerStarted","Data":"599c51e83f2ee22652f9b54f72f56ff12ef8ea5fe9383b3e60f4660104f46d7b"} Feb 02 18:24:28 crc kubenswrapper[4858]: I0202 18:24:28.532547 4858 generic.go:334] "Generic (PLEG): container finished" podID="422a9ad9-998c-429a-adb8-5914ca73b9a1" containerID="599c51e83f2ee22652f9b54f72f56ff12ef8ea5fe9383b3e60f4660104f46d7b" exitCode=0 Feb 02 18:24:28 crc kubenswrapper[4858]: I0202 18:24:28.532614 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8tb8" event={"ID":"422a9ad9-998c-429a-adb8-5914ca73b9a1","Type":"ContainerDied","Data":"599c51e83f2ee22652f9b54f72f56ff12ef8ea5fe9383b3e60f4660104f46d7b"} Feb 02 18:24:29 crc kubenswrapper[4858]: I0202 18:24:29.557673 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8tb8" event={"ID":"422a9ad9-998c-429a-adb8-5914ca73b9a1","Type":"ContainerStarted","Data":"3d562d041a27bfc250ae536345b8054926a4edb29ec78cb212f6b44d8bd5329f"} Feb 02 18:24:29 crc kubenswrapper[4858]: I0202 18:24:29.584129 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k8tb8" podStartSLOduration=3.104508234 podStartE2EDuration="5.584104849s" podCreationTimestamp="2026-02-02 18:24:24 +0000 UTC" firstStartedPulling="2026-02-02 18:24:26.512460436 +0000 UTC m=+4167.664875711" lastFinishedPulling="2026-02-02 18:24:28.992057061 +0000 UTC m=+4170.144472326" observedRunningTime="2026-02-02 18:24:29.575960296 +0000 UTC m=+4170.728375561" 
watchObservedRunningTime="2026-02-02 18:24:29.584104849 +0000 UTC m=+4170.736520114" Feb 02 18:24:32 crc kubenswrapper[4858]: I0202 18:24:32.065884 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m7ntv"] Feb 02 18:24:32 crc kubenswrapper[4858]: I0202 18:24:32.090873 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7ntv" Feb 02 18:24:32 crc kubenswrapper[4858]: I0202 18:24:32.123722 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m7ntv"] Feb 02 18:24:32 crc kubenswrapper[4858]: I0202 18:24:32.242779 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76mk9\" (UniqueName: \"kubernetes.io/projected/ac545aa5-a96a-4431-974d-01dc707da19b-kube-api-access-76mk9\") pod \"community-operators-m7ntv\" (UID: \"ac545aa5-a96a-4431-974d-01dc707da19b\") " pod="openshift-marketplace/community-operators-m7ntv" Feb 02 18:24:32 crc kubenswrapper[4858]: I0202 18:24:32.243115 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac545aa5-a96a-4431-974d-01dc707da19b-catalog-content\") pod \"community-operators-m7ntv\" (UID: \"ac545aa5-a96a-4431-974d-01dc707da19b\") " pod="openshift-marketplace/community-operators-m7ntv" Feb 02 18:24:32 crc kubenswrapper[4858]: I0202 18:24:32.243235 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac545aa5-a96a-4431-974d-01dc707da19b-utilities\") pod \"community-operators-m7ntv\" (UID: \"ac545aa5-a96a-4431-974d-01dc707da19b\") " pod="openshift-marketplace/community-operators-m7ntv" Feb 02 18:24:32 crc kubenswrapper[4858]: I0202 18:24:32.345254 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76mk9\" (UniqueName: \"kubernetes.io/projected/ac545aa5-a96a-4431-974d-01dc707da19b-kube-api-access-76mk9\") pod \"community-operators-m7ntv\" (UID: \"ac545aa5-a96a-4431-974d-01dc707da19b\") " pod="openshift-marketplace/community-operators-m7ntv" Feb 02 18:24:32 crc kubenswrapper[4858]: I0202 18:24:32.345885 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac545aa5-a96a-4431-974d-01dc707da19b-catalog-content\") pod \"community-operators-m7ntv\" (UID: \"ac545aa5-a96a-4431-974d-01dc707da19b\") " pod="openshift-marketplace/community-operators-m7ntv" Feb 02 18:24:32 crc kubenswrapper[4858]: I0202 18:24:32.345923 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac545aa5-a96a-4431-974d-01dc707da19b-utilities\") pod \"community-operators-m7ntv\" (UID: \"ac545aa5-a96a-4431-974d-01dc707da19b\") " pod="openshift-marketplace/community-operators-m7ntv" Feb 02 18:24:32 crc kubenswrapper[4858]: I0202 18:24:32.346647 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac545aa5-a96a-4431-974d-01dc707da19b-catalog-content\") pod \"community-operators-m7ntv\" (UID: \"ac545aa5-a96a-4431-974d-01dc707da19b\") " pod="openshift-marketplace/community-operators-m7ntv" Feb 02 18:24:32 crc kubenswrapper[4858]: I0202 18:24:32.346706 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac545aa5-a96a-4431-974d-01dc707da19b-utilities\") pod \"community-operators-m7ntv\" (UID: \"ac545aa5-a96a-4431-974d-01dc707da19b\") " pod="openshift-marketplace/community-operators-m7ntv" Feb 02 18:24:32 crc kubenswrapper[4858]: I0202 18:24:32.367373 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76mk9\" (UniqueName: \"kubernetes.io/projected/ac545aa5-a96a-4431-974d-01dc707da19b-kube-api-access-76mk9\") pod \"community-operators-m7ntv\" (UID: \"ac545aa5-a96a-4431-974d-01dc707da19b\") " pod="openshift-marketplace/community-operators-m7ntv" Feb 02 18:24:32 crc kubenswrapper[4858]: I0202 18:24:32.418562 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7ntv" Feb 02 18:24:32 crc kubenswrapper[4858]: I0202 18:24:32.952734 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m7ntv"] Feb 02 18:24:32 crc kubenswrapper[4858]: W0202 18:24:32.955300 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac545aa5_a96a_4431_974d_01dc707da19b.slice/crio-96e64652956365a40f621e58dd7848af2932b26ade219d5260b6cd5b50b46df8 WatchSource:0}: Error finding container 96e64652956365a40f621e58dd7848af2932b26ade219d5260b6cd5b50b46df8: Status 404 returned error can't find the container with id 96e64652956365a40f621e58dd7848af2932b26ade219d5260b6cd5b50b46df8 Feb 02 18:24:33 crc kubenswrapper[4858]: I0202 18:24:33.598831 4858 generic.go:334] "Generic (PLEG): container finished" podID="ac545aa5-a96a-4431-974d-01dc707da19b" containerID="f8d00ac445a3b7c6cd82f4e10c0f9db67a014a6dc05af4e4b0b91180a52831e8" exitCode=0 Feb 02 18:24:33 crc kubenswrapper[4858]: I0202 18:24:33.598930 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7ntv" event={"ID":"ac545aa5-a96a-4431-974d-01dc707da19b","Type":"ContainerDied","Data":"f8d00ac445a3b7c6cd82f4e10c0f9db67a014a6dc05af4e4b0b91180a52831e8"} Feb 02 18:24:33 crc kubenswrapper[4858]: I0202 18:24:33.599149 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7ntv" event={"ID":"ac545aa5-a96a-4431-974d-01dc707da19b","Type":"ContainerStarted","Data":"96e64652956365a40f621e58dd7848af2932b26ade219d5260b6cd5b50b46df8"} Feb 02 18:24:34 crc kubenswrapper[4858]: I0202 18:24:34.939030 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k8tb8" Feb 02 18:24:34 crc kubenswrapper[4858]: I0202 18:24:34.939346 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k8tb8" Feb 02 18:24:34 crc kubenswrapper[4858]: I0202 18:24:34.996881 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k8tb8" Feb 02 18:24:35 crc kubenswrapper[4858]: I0202 18:24:35.626706 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7ntv" event={"ID":"ac545aa5-a96a-4431-974d-01dc707da19b","Type":"ContainerStarted","Data":"96428a7d0ab65bbf0c10435861f15102f57d264fe38080c0b19149216c93be00"} Feb 02 18:24:35 crc kubenswrapper[4858]: I0202 18:24:35.682533 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-k8tb8" Feb 02 18:24:36 crc kubenswrapper[4858]: I0202 18:24:36.556587 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k8tb8"] Feb 02 18:24:36 crc kubenswrapper[4858]: I0202 18:24:36.639222 4858 generic.go:334] "Generic (PLEG): container finished" podID="ac545aa5-a96a-4431-974d-01dc707da19b" containerID="96428a7d0ab65bbf0c10435861f15102f57d264fe38080c0b19149216c93be00" exitCode=0 Feb 02 18:24:36 crc kubenswrapper[4858]: I0202 18:24:36.639499 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7ntv" event={"ID":"ac545aa5-a96a-4431-974d-01dc707da19b","Type":"ContainerDied","Data":"96428a7d0ab65bbf0c10435861f15102f57d264fe38080c0b19149216c93be00"} Feb 02 18:24:37 crc kubenswrapper[4858]: I0202 18:24:37.651435 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7ntv" event={"ID":"ac545aa5-a96a-4431-974d-01dc707da19b","Type":"ContainerStarted","Data":"57e0a9b75202dd1fff593c099eb7149c1adbb82c946329915c531b0855ad8b63"} Feb 02 18:24:37 crc kubenswrapper[4858]: I0202 18:24:37.651584 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k8tb8" podUID="422a9ad9-998c-429a-adb8-5914ca73b9a1" containerName="registry-server" containerID="cri-o://3d562d041a27bfc250ae536345b8054926a4edb29ec78cb212f6b44d8bd5329f" gracePeriod=2 Feb 02 18:24:37 crc kubenswrapper[4858]: I0202 18:24:37.674534 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m7ntv" podStartSLOduration=2.7507758129999997 podStartE2EDuration="5.674515436s" podCreationTimestamp="2026-02-02 18:24:32 +0000 UTC" firstStartedPulling="2026-02-02 18:24:33.602112789 +0000 UTC m=+4174.754528064" lastFinishedPulling="2026-02-02 18:24:36.525852432 +0000 UTC m=+4177.678267687" observedRunningTime="2026-02-02 18:24:37.673536618 +0000 UTC m=+4178.825951893" watchObservedRunningTime="2026-02-02 18:24:37.674515436 +0000 UTC m=+4178.826930701" Feb 02 18:24:38 crc kubenswrapper[4858]: I0202 18:24:38.662781 4858 generic.go:334] "Generic (PLEG): container finished" podID="422a9ad9-998c-429a-adb8-5914ca73b9a1" containerID="3d562d041a27bfc250ae536345b8054926a4edb29ec78cb212f6b44d8bd5329f" exitCode=0 Feb 02 18:24:38 crc kubenswrapper[4858]: I0202 18:24:38.663871 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8tb8" event={"ID":"422a9ad9-998c-429a-adb8-5914ca73b9a1","Type":"ContainerDied","Data":"3d562d041a27bfc250ae536345b8054926a4edb29ec78cb212f6b44d8bd5329f"} Feb 02 18:24:39 crc kubenswrapper[4858]: I0202 18:24:39.030361 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k8tb8" Feb 02 18:24:39 crc kubenswrapper[4858]: I0202 18:24:39.175617 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/422a9ad9-998c-429a-adb8-5914ca73b9a1-utilities\") pod \"422a9ad9-998c-429a-adb8-5914ca73b9a1\" (UID: \"422a9ad9-998c-429a-adb8-5914ca73b9a1\") " Feb 02 18:24:39 crc kubenswrapper[4858]: I0202 18:24:39.176075 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2447x\" (UniqueName: \"kubernetes.io/projected/422a9ad9-998c-429a-adb8-5914ca73b9a1-kube-api-access-2447x\") pod \"422a9ad9-998c-429a-adb8-5914ca73b9a1\" (UID: \"422a9ad9-998c-429a-adb8-5914ca73b9a1\") " Feb 02 18:24:39 crc kubenswrapper[4858]: I0202 18:24:39.176221 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/422a9ad9-998c-429a-adb8-5914ca73b9a1-catalog-content\") pod \"422a9ad9-998c-429a-adb8-5914ca73b9a1\" (UID: \"422a9ad9-998c-429a-adb8-5914ca73b9a1\") " Feb 02 18:24:39 crc kubenswrapper[4858]: I0202 18:24:39.176433 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/422a9ad9-998c-429a-adb8-5914ca73b9a1-utilities" (OuterVolumeSpecName: "utilities") pod "422a9ad9-998c-429a-adb8-5914ca73b9a1" (UID: "422a9ad9-998c-429a-adb8-5914ca73b9a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:24:39 crc kubenswrapper[4858]: I0202 18:24:39.176697 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/422a9ad9-998c-429a-adb8-5914ca73b9a1-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 18:24:39 crc kubenswrapper[4858]: I0202 18:24:39.182353 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/422a9ad9-998c-429a-adb8-5914ca73b9a1-kube-api-access-2447x" (OuterVolumeSpecName: "kube-api-access-2447x") pod "422a9ad9-998c-429a-adb8-5914ca73b9a1" (UID: "422a9ad9-998c-429a-adb8-5914ca73b9a1"). InnerVolumeSpecName "kube-api-access-2447x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:24:39 crc kubenswrapper[4858]: I0202 18:24:39.229499 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/422a9ad9-998c-429a-adb8-5914ca73b9a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "422a9ad9-998c-429a-adb8-5914ca73b9a1" (UID: "422a9ad9-998c-429a-adb8-5914ca73b9a1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:24:39 crc kubenswrapper[4858]: I0202 18:24:39.278044 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/422a9ad9-998c-429a-adb8-5914ca73b9a1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 18:24:39 crc kubenswrapper[4858]: I0202 18:24:39.278070 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2447x\" (UniqueName: \"kubernetes.io/projected/422a9ad9-998c-429a-adb8-5914ca73b9a1-kube-api-access-2447x\") on node \"crc\" DevicePath \"\"" Feb 02 18:24:39 crc kubenswrapper[4858]: I0202 18:24:39.677732 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8tb8" event={"ID":"422a9ad9-998c-429a-adb8-5914ca73b9a1","Type":"ContainerDied","Data":"f7408224df5eee33a7fbc44413d64202a8a3eab7c5b2aa7931205fcf83066f85"} Feb 02 18:24:39 crc kubenswrapper[4858]: I0202 18:24:39.677800 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k8tb8" Feb 02 18:24:39 crc kubenswrapper[4858]: I0202 18:24:39.677805 4858 scope.go:117] "RemoveContainer" containerID="3d562d041a27bfc250ae536345b8054926a4edb29ec78cb212f6b44d8bd5329f" Feb 02 18:24:39 crc kubenswrapper[4858]: I0202 18:24:39.699637 4858 scope.go:117] "RemoveContainer" containerID="599c51e83f2ee22652f9b54f72f56ff12ef8ea5fe9383b3e60f4660104f46d7b" Feb 02 18:24:39 crc kubenswrapper[4858]: I0202 18:24:39.720935 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k8tb8"] Feb 02 18:24:39 crc kubenswrapper[4858]: I0202 18:24:39.732691 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k8tb8"] Feb 02 18:24:39 crc kubenswrapper[4858]: I0202 18:24:39.739504 4858 scope.go:117] "RemoveContainer" containerID="06eed9eb1058da5381f8ba82e6c2dfd71594a1bb1d9aa49fa65b3bf770501547" Feb 02 18:24:40 crc kubenswrapper[4858]: I0202 18:24:40.413377 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="422a9ad9-998c-429a-adb8-5914ca73b9a1" path="/var/lib/kubelet/pods/422a9ad9-998c-429a-adb8-5914ca73b9a1/volumes" Feb 02 18:24:42 crc kubenswrapper[4858]: I0202 18:24:42.419390 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m7ntv" Feb 02 18:24:42 crc kubenswrapper[4858]: I0202 18:24:42.419440 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m7ntv" Feb 02 18:24:42 crc kubenswrapper[4858]: I0202 18:24:42.476012 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m7ntv" Feb 02 18:24:42 crc kubenswrapper[4858]: I0202 18:24:42.762472 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m7ntv" Feb 02 18:24:43 crc kubenswrapper[4858]: I0202 18:24:43.551788 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m7ntv"] Feb 02 18:24:44 crc kubenswrapper[4858]: I0202 18:24:44.727754 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m7ntv" podUID="ac545aa5-a96a-4431-974d-01dc707da19b" containerName="registry-server" containerID="cri-o://57e0a9b75202dd1fff593c099eb7149c1adbb82c946329915c531b0855ad8b63" 
gracePeriod=2 Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.713401 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7ntv" Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.745383 4858 generic.go:334] "Generic (PLEG): container finished" podID="ac545aa5-a96a-4431-974d-01dc707da19b" containerID="57e0a9b75202dd1fff593c099eb7149c1adbb82c946329915c531b0855ad8b63" exitCode=0 Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.745454 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7ntv" Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.745457 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7ntv" event={"ID":"ac545aa5-a96a-4431-974d-01dc707da19b","Type":"ContainerDied","Data":"57e0a9b75202dd1fff593c099eb7149c1adbb82c946329915c531b0855ad8b63"} Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.745572 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7ntv" event={"ID":"ac545aa5-a96a-4431-974d-01dc707da19b","Type":"ContainerDied","Data":"96e64652956365a40f621e58dd7848af2932b26ade219d5260b6cd5b50b46df8"} Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.745592 4858 scope.go:117] "RemoveContainer" containerID="57e0a9b75202dd1fff593c099eb7149c1adbb82c946329915c531b0855ad8b63" Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.763812 4858 scope.go:117] "RemoveContainer" containerID="96428a7d0ab65bbf0c10435861f15102f57d264fe38080c0b19149216c93be00" Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.785227 4858 scope.go:117] "RemoveContainer" containerID="f8d00ac445a3b7c6cd82f4e10c0f9db67a014a6dc05af4e4b0b91180a52831e8" Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.802396 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac545aa5-a96a-4431-974d-01dc707da19b-catalog-content\") pod \"ac545aa5-a96a-4431-974d-01dc707da19b\" (UID: \"ac545aa5-a96a-4431-974d-01dc707da19b\") " Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.802515 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac545aa5-a96a-4431-974d-01dc707da19b-utilities\") pod \"ac545aa5-a96a-4431-974d-01dc707da19b\" (UID: \"ac545aa5-a96a-4431-974d-01dc707da19b\") " Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.802597 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76mk9\" (UniqueName: \"kubernetes.io/projected/ac545aa5-a96a-4431-974d-01dc707da19b-kube-api-access-76mk9\") pod \"ac545aa5-a96a-4431-974d-01dc707da19b\" (UID: \"ac545aa5-a96a-4431-974d-01dc707da19b\") " Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.803820 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac545aa5-a96a-4431-974d-01dc707da19b-utilities" (OuterVolumeSpecName: "utilities") pod "ac545aa5-a96a-4431-974d-01dc707da19b" (UID: "ac545aa5-a96a-4431-974d-01dc707da19b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.810385 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac545aa5-a96a-4431-974d-01dc707da19b-kube-api-access-76mk9" (OuterVolumeSpecName: "kube-api-access-76mk9") pod "ac545aa5-a96a-4431-974d-01dc707da19b" (UID: "ac545aa5-a96a-4431-974d-01dc707da19b"). InnerVolumeSpecName "kube-api-access-76mk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.861100 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac545aa5-a96a-4431-974d-01dc707da19b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac545aa5-a96a-4431-974d-01dc707da19b" (UID: "ac545aa5-a96a-4431-974d-01dc707da19b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.866381 4858 scope.go:117] "RemoveContainer" containerID="57e0a9b75202dd1fff593c099eb7149c1adbb82c946329915c531b0855ad8b63" Feb 02 18:24:45 crc kubenswrapper[4858]: E0202 18:24:45.867049 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57e0a9b75202dd1fff593c099eb7149c1adbb82c946329915c531b0855ad8b63\": container with ID starting with 57e0a9b75202dd1fff593c099eb7149c1adbb82c946329915c531b0855ad8b63 not found: ID does not exist" containerID="57e0a9b75202dd1fff593c099eb7149c1adbb82c946329915c531b0855ad8b63" Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.867097 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57e0a9b75202dd1fff593c099eb7149c1adbb82c946329915c531b0855ad8b63"} err="failed to get container status \"57e0a9b75202dd1fff593c099eb7149c1adbb82c946329915c531b0855ad8b63\": rpc error: code = NotFound desc = could not find container \"57e0a9b75202dd1fff593c099eb7149c1adbb82c946329915c531b0855ad8b63\": container with ID starting with 57e0a9b75202dd1fff593c099eb7149c1adbb82c946329915c531b0855ad8b63 not found: ID does not exist" Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.867124 4858 scope.go:117] "RemoveContainer" containerID="96428a7d0ab65bbf0c10435861f15102f57d264fe38080c0b19149216c93be00" Feb 02 18:24:45 crc kubenswrapper[4858]: E0202 18:24:45.867487 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96428a7d0ab65bbf0c10435861f15102f57d264fe38080c0b19149216c93be00\": container with ID starting with 96428a7d0ab65bbf0c10435861f15102f57d264fe38080c0b19149216c93be00 not found: ID does not exist" containerID="96428a7d0ab65bbf0c10435861f15102f57d264fe38080c0b19149216c93be00" Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.867530 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96428a7d0ab65bbf0c10435861f15102f57d264fe38080c0b19149216c93be00"} err="failed to get container status \"96428a7d0ab65bbf0c10435861f15102f57d264fe38080c0b19149216c93be00\": rpc error: code = NotFound desc = could not find container \"96428a7d0ab65bbf0c10435861f15102f57d264fe38080c0b19149216c93be00\": container with ID starting with 96428a7d0ab65bbf0c10435861f15102f57d264fe38080c0b19149216c93be00 not found: ID does not exist" Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.867559 4858 scope.go:117] "RemoveContainer" 
containerID="f8d00ac445a3b7c6cd82f4e10c0f9db67a014a6dc05af4e4b0b91180a52831e8" Feb 02 18:24:45 crc kubenswrapper[4858]: E0202 18:24:45.867810 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8d00ac445a3b7c6cd82f4e10c0f9db67a014a6dc05af4e4b0b91180a52831e8\": container with ID starting with f8d00ac445a3b7c6cd82f4e10c0f9db67a014a6dc05af4e4b0b91180a52831e8 not found: ID does not exist" containerID="f8d00ac445a3b7c6cd82f4e10c0f9db67a014a6dc05af4e4b0b91180a52831e8" Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.867843 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8d00ac445a3b7c6cd82f4e10c0f9db67a014a6dc05af4e4b0b91180a52831e8"} err="failed to get container status \"f8d00ac445a3b7c6cd82f4e10c0f9db67a014a6dc05af4e4b0b91180a52831e8\": rpc error: code = NotFound desc = could not find container \"f8d00ac445a3b7c6cd82f4e10c0f9db67a014a6dc05af4e4b0b91180a52831e8\": container with ID starting with f8d00ac445a3b7c6cd82f4e10c0f9db67a014a6dc05af4e4b0b91180a52831e8 not found: ID does not exist" Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.905158 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76mk9\" (UniqueName: \"kubernetes.io/projected/ac545aa5-a96a-4431-974d-01dc707da19b-kube-api-access-76mk9\") on node \"crc\" DevicePath \"\"" Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.905189 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac545aa5-a96a-4431-974d-01dc707da19b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 18:24:45 crc kubenswrapper[4858]: I0202 18:24:45.905201 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac545aa5-a96a-4431-974d-01dc707da19b-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 18:24:46 crc kubenswrapper[4858]: I0202 18:24:46.083516 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m7ntv"] Feb 02 18:24:46 crc kubenswrapper[4858]: I0202 18:24:46.092446 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m7ntv"] Feb 02 18:24:46 crc kubenswrapper[4858]: I0202 18:24:46.423385 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac545aa5-a96a-4431-974d-01dc707da19b" path="/var/lib/kubelet/pods/ac545aa5-a96a-4431-974d-01dc707da19b/volumes" Feb 02 18:24:53 crc kubenswrapper[4858]: I0202 18:24:53.996115 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6nlmc"] Feb 02 18:24:54 crc kubenswrapper[4858]: E0202 18:24:53.999619 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac545aa5-a96a-4431-974d-01dc707da19b" containerName="extract-utilities" Feb 02 18:24:54 crc kubenswrapper[4858]: I0202 18:24:53.999654 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac545aa5-a96a-4431-974d-01dc707da19b" containerName="extract-utilities" Feb 02 18:24:54 crc kubenswrapper[4858]: E0202 18:24:53.999677 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac545aa5-a96a-4431-974d-01dc707da19b" containerName="registry-server" Feb 02 18:24:54 crc kubenswrapper[4858]: I0202 18:24:53.999686 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac545aa5-a96a-4431-974d-01dc707da19b" containerName="registry-server" Feb 02 18:24:54 crc kubenswrapper[4858]: E0202 
18:24:53.999705 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422a9ad9-998c-429a-adb8-5914ca73b9a1" containerName="extract-utilities" Feb 02 18:24:54 crc kubenswrapper[4858]: I0202 18:24:53.999712 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="422a9ad9-998c-429a-adb8-5914ca73b9a1" containerName="extract-utilities" Feb 02 18:24:54 crc kubenswrapper[4858]: E0202 18:24:53.999726 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac545aa5-a96a-4431-974d-01dc707da19b" containerName="extract-content" Feb 02 18:24:54 crc kubenswrapper[4858]: I0202 18:24:53.999735 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac545aa5-a96a-4431-974d-01dc707da19b" containerName="extract-content" Feb 02 18:24:54 crc kubenswrapper[4858]: E0202 18:24:53.999760 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422a9ad9-998c-429a-adb8-5914ca73b9a1" containerName="registry-server" Feb 02 18:24:54 crc kubenswrapper[4858]: I0202 18:24:53.999767 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="422a9ad9-998c-429a-adb8-5914ca73b9a1" containerName="registry-server" Feb 02 18:24:54 crc kubenswrapper[4858]: E0202 18:24:53.999781 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422a9ad9-998c-429a-adb8-5914ca73b9a1" containerName="extract-content" Feb 02 18:24:54 crc kubenswrapper[4858]: I0202 18:24:53.999790 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="422a9ad9-998c-429a-adb8-5914ca73b9a1" containerName="extract-content" Feb 02 18:24:54 crc kubenswrapper[4858]: I0202 18:24:54.000160 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac545aa5-a96a-4431-974d-01dc707da19b" containerName="registry-server" Feb 02 18:24:54 crc kubenswrapper[4858]: I0202 18:24:54.000192 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="422a9ad9-998c-429a-adb8-5914ca73b9a1" containerName="registry-server" Feb 02 18:24:54 crc kubenswrapper[4858]: I0202 18:24:54.001727 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6nlmc" Feb 02 18:24:54 crc kubenswrapper[4858]: I0202 18:24:54.007928 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6nlmc"] Feb 02 18:24:54 crc kubenswrapper[4858]: I0202 18:24:54.054823 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cphdx\" (UniqueName: \"kubernetes.io/projected/7cc20ec5-3eab-4274-b736-cb83ced5299f-kube-api-access-cphdx\") pod \"redhat-operators-6nlmc\" (UID: \"7cc20ec5-3eab-4274-b736-cb83ced5299f\") " pod="openshift-marketplace/redhat-operators-6nlmc" Feb 02 18:24:54 crc kubenswrapper[4858]: I0202 18:24:54.054931 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cc20ec5-3eab-4274-b736-cb83ced5299f-catalog-content\") pod \"redhat-operators-6nlmc\" (UID: \"7cc20ec5-3eab-4274-b736-cb83ced5299f\") " pod="openshift-marketplace/redhat-operators-6nlmc" Feb 02 18:24:54 crc kubenswrapper[4858]: I0202 18:24:54.055000 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cc20ec5-3eab-4274-b736-cb83ced5299f-utilities\") pod \"redhat-operators-6nlmc\" (UID: \"7cc20ec5-3eab-4274-b736-cb83ced5299f\") " pod="openshift-marketplace/redhat-operators-6nlmc" Feb 02 18:24:54 crc kubenswrapper[4858]: I0202 18:24:54.157180 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cc20ec5-3eab-4274-b736-cb83ced5299f-catalog-content\") pod \"redhat-operators-6nlmc\" (UID: \"7cc20ec5-3eab-4274-b736-cb83ced5299f\") " pod="openshift-marketplace/redhat-operators-6nlmc" Feb 02 18:24:54 crc kubenswrapper[4858]: I0202 18:24:54.157247 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cc20ec5-3eab-4274-b736-cb83ced5299f-utilities\") pod \"redhat-operators-6nlmc\" (UID: \"7cc20ec5-3eab-4274-b736-cb83ced5299f\") " pod="openshift-marketplace/redhat-operators-6nlmc" Feb 02 18:24:54 crc kubenswrapper[4858]: I0202 18:24:54.157354 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cphdx\" (UniqueName: \"kubernetes.io/projected/7cc20ec5-3eab-4274-b736-cb83ced5299f-kube-api-access-cphdx\") pod \"redhat-operators-6nlmc\" (UID: \"7cc20ec5-3eab-4274-b736-cb83ced5299f\") " pod="openshift-marketplace/redhat-operators-6nlmc" Feb 02 18:24:54 crc kubenswrapper[4858]: I0202 18:24:54.157952 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cc20ec5-3eab-4274-b736-cb83ced5299f-utilities\") pod \"redhat-operators-6nlmc\" (UID: \"7cc20ec5-3eab-4274-b736-cb83ced5299f\") " pod="openshift-marketplace/redhat-operators-6nlmc" Feb 02 18:24:54 crc kubenswrapper[4858]: I0202 18:24:54.158015 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cc20ec5-3eab-4274-b736-cb83ced5299f-catalog-content\") pod \"redhat-operators-6nlmc\" (UID: \"7cc20ec5-3eab-4274-b736-cb83ced5299f\") " pod="openshift-marketplace/redhat-operators-6nlmc" Feb 02 18:24:54 crc kubenswrapper[4858]: I0202 18:24:54.180650 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cphdx\" (UniqueName: \"kubernetes.io/projected/7cc20ec5-3eab-4274-b736-cb83ced5299f-kube-api-access-cphdx\") pod \"redhat-operators-6nlmc\" (UID: \"7cc20ec5-3eab-4274-b736-cb83ced5299f\") " pod="openshift-marketplace/redhat-operators-6nlmc" Feb 02 18:24:54 crc kubenswrapper[4858]: I0202 18:24:54.324283 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6nlmc" Feb 02 18:24:55 crc kubenswrapper[4858]: I0202 18:24:55.340636 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6nlmc"] Feb 02 18:24:55 crc kubenswrapper[4858]: W0202 18:24:55.347060 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cc20ec5_3eab_4274_b736_cb83ced5299f.slice/crio-24222bff6893562b7227105e3432d93a786045f6589c30d9a44bae2b09893b0d WatchSource:0}: Error finding container 24222bff6893562b7227105e3432d93a786045f6589c30d9a44bae2b09893b0d: Status 404 returned error can't find the container with id 24222bff6893562b7227105e3432d93a786045f6589c30d9a44bae2b09893b0d Feb 02 18:24:55 crc kubenswrapper[4858]: I0202 18:24:55.847341 4858 generic.go:334] "Generic (PLEG): container finished" podID="7cc20ec5-3eab-4274-b736-cb83ced5299f" containerID="8ffc5d23402f4dbe97a1f5b42bbda96de7f05313ac5986f5d09130041327b034" exitCode=0 Feb 02 18:24:55 crc kubenswrapper[4858]: I0202 18:24:55.847426 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nlmc" event={"ID":"7cc20ec5-3eab-4274-b736-cb83ced5299f","Type":"ContainerDied","Data":"8ffc5d23402f4dbe97a1f5b42bbda96de7f05313ac5986f5d09130041327b034"} Feb 02 18:24:55 crc kubenswrapper[4858]: I0202 18:24:55.847491 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nlmc" event={"ID":"7cc20ec5-3eab-4274-b736-cb83ced5299f","Type":"ContainerStarted","Data":"24222bff6893562b7227105e3432d93a786045f6589c30d9a44bae2b09893b0d"} Feb 02 18:24:56 crc kubenswrapper[4858]: I0202 18:24:56.859003 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nlmc" event={"ID":"7cc20ec5-3eab-4274-b736-cb83ced5299f","Type":"ContainerStarted","Data":"ee85f783be2848e72c42c9ef1f68aa06e3a666e00b6fd9d01d40b85b4675ee1d"} Feb 02 18:24:57 crc kubenswrapper[4858]: I0202 18:24:57.871430 4858 generic.go:334] "Generic (PLEG): container finished" podID="7cc20ec5-3eab-4274-b736-cb83ced5299f" containerID="ee85f783be2848e72c42c9ef1f68aa06e3a666e00b6fd9d01d40b85b4675ee1d" exitCode=0 Feb 02 18:24:57 crc kubenswrapper[4858]: I0202 18:24:57.871697 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nlmc" event={"ID":"7cc20ec5-3eab-4274-b736-cb83ced5299f","Type":"ContainerDied","Data":"ee85f783be2848e72c42c9ef1f68aa06e3a666e00b6fd9d01d40b85b4675ee1d"} Feb 02 18:24:58 crc kubenswrapper[4858]: I0202 18:24:58.883562 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nlmc" event={"ID":"7cc20ec5-3eab-4274-b736-cb83ced5299f","Type":"ContainerStarted","Data":"6d376e0db93af0c43dcaa8247829a1049244ed2b17aec42249343afdca54715d"} Feb 02 18:24:58 crc kubenswrapper[4858]: I0202 18:24:58.913724 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6nlmc" podStartSLOduration=3.297440461 podStartE2EDuration="5.913699105s" 
podCreationTimestamp="2026-02-02 18:24:53 +0000 UTC" firstStartedPulling="2026-02-02 18:24:55.849571556 +0000 UTC m=+4197.001986821" lastFinishedPulling="2026-02-02 18:24:58.4658302 +0000 UTC m=+4199.618245465" observedRunningTime="2026-02-02 18:24:58.903479382 +0000 UTC m=+4200.055894677" watchObservedRunningTime="2026-02-02 18:24:58.913699105 +0000 UTC m=+4200.066114380" Feb 02 18:25:04 crc kubenswrapper[4858]: I0202 18:25:04.325206 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6nlmc" Feb 02 18:25:04 crc kubenswrapper[4858]: I0202 18:25:04.325769 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6nlmc" Feb 02 18:25:04 crc kubenswrapper[4858]: I0202 18:25:04.371061 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6nlmc" Feb 02 18:25:04 crc kubenswrapper[4858]: I0202 18:25:04.980690 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6nlmc" Feb 02 18:25:05 crc kubenswrapper[4858]: I0202 18:25:05.027529 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6nlmc"] Feb 02 18:25:06 crc kubenswrapper[4858]: I0202 18:25:06.955268 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6nlmc" podUID="7cc20ec5-3eab-4274-b736-cb83ced5299f" containerName="registry-server" containerID="cri-o://6d376e0db93af0c43dcaa8247829a1049244ed2b17aec42249343afdca54715d" gracePeriod=2 Feb 02 18:25:07 crc kubenswrapper[4858]: I0202 18:25:07.375316 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6nlmc" Feb 02 18:25:07 crc kubenswrapper[4858]: I0202 18:25:07.433950 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cc20ec5-3eab-4274-b736-cb83ced5299f-utilities\") pod \"7cc20ec5-3eab-4274-b736-cb83ced5299f\" (UID: \"7cc20ec5-3eab-4274-b736-cb83ced5299f\") " Feb 02 18:25:07 crc kubenswrapper[4858]: I0202 18:25:07.434074 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cc20ec5-3eab-4274-b736-cb83ced5299f-catalog-content\") pod \"7cc20ec5-3eab-4274-b736-cb83ced5299f\" (UID: \"7cc20ec5-3eab-4274-b736-cb83ced5299f\") " Feb 02 18:25:07 crc kubenswrapper[4858]: I0202 18:25:07.434155 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cphdx\" (UniqueName: \"kubernetes.io/projected/7cc20ec5-3eab-4274-b736-cb83ced5299f-kube-api-access-cphdx\") pod \"7cc20ec5-3eab-4274-b736-cb83ced5299f\" (UID: \"7cc20ec5-3eab-4274-b736-cb83ced5299f\") " Feb 02 18:25:07 crc kubenswrapper[4858]: I0202 18:25:07.435162 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cc20ec5-3eab-4274-b736-cb83ced5299f-utilities" (OuterVolumeSpecName: "utilities") pod "7cc20ec5-3eab-4274-b736-cb83ced5299f" (UID: "7cc20ec5-3eab-4274-b736-cb83ced5299f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:25:07 crc kubenswrapper[4858]: I0202 18:25:07.440501 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cc20ec5-3eab-4274-b736-cb83ced5299f-kube-api-access-cphdx" (OuterVolumeSpecName: "kube-api-access-cphdx") pod "7cc20ec5-3eab-4274-b736-cb83ced5299f" (UID: "7cc20ec5-3eab-4274-b736-cb83ced5299f"). InnerVolumeSpecName "kube-api-access-cphdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 18:25:07 crc kubenswrapper[4858]: I0202 18:25:07.538648 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cphdx\" (UniqueName: \"kubernetes.io/projected/7cc20ec5-3eab-4274-b736-cb83ced5299f-kube-api-access-cphdx\") on node \"crc\" DevicePath \"\"" Feb 02 18:25:07 crc kubenswrapper[4858]: I0202 18:25:07.538688 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cc20ec5-3eab-4274-b736-cb83ced5299f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 18:25:07 crc kubenswrapper[4858]: I0202 18:25:07.964805 4858 generic.go:334] "Generic (PLEG): container finished" podID="7cc20ec5-3eab-4274-b736-cb83ced5299f" containerID="6d376e0db93af0c43dcaa8247829a1049244ed2b17aec42249343afdca54715d" exitCode=0 Feb 02 18:25:07 crc kubenswrapper[4858]: I0202 18:25:07.964844 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nlmc" event={"ID":"7cc20ec5-3eab-4274-b736-cb83ced5299f","Type":"ContainerDied","Data":"6d376e0db93af0c43dcaa8247829a1049244ed2b17aec42249343afdca54715d"} Feb 02 18:25:07 crc kubenswrapper[4858]: I0202 18:25:07.964868 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nlmc" event={"ID":"7cc20ec5-3eab-4274-b736-cb83ced5299f","Type":"ContainerDied","Data":"24222bff6893562b7227105e3432d93a786045f6589c30d9a44bae2b09893b0d"} Feb 02 18:25:07 crc kubenswrapper[4858]: I0202 18:25:07.964885 4858 scope.go:117] "RemoveContainer" containerID="6d376e0db93af0c43dcaa8247829a1049244ed2b17aec42249343afdca54715d" Feb 02 18:25:07 crc kubenswrapper[4858]: I0202 18:25:07.965071 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6nlmc" Feb 02 18:25:07 crc kubenswrapper[4858]: I0202 18:25:07.985146 4858 scope.go:117] "RemoveContainer" containerID="ee85f783be2848e72c42c9ef1f68aa06e3a666e00b6fd9d01d40b85b4675ee1d" Feb 02 18:25:08 crc kubenswrapper[4858]: I0202 18:25:08.004309 4858 scope.go:117] "RemoveContainer" containerID="8ffc5d23402f4dbe97a1f5b42bbda96de7f05313ac5986f5d09130041327b034" Feb 02 18:25:08 crc kubenswrapper[4858]: I0202 18:25:08.053519 4858 scope.go:117] "RemoveContainer" containerID="6d376e0db93af0c43dcaa8247829a1049244ed2b17aec42249343afdca54715d" Feb 02 18:25:08 crc kubenswrapper[4858]: E0202 18:25:08.054059 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d376e0db93af0c43dcaa8247829a1049244ed2b17aec42249343afdca54715d\": container with ID starting with 6d376e0db93af0c43dcaa8247829a1049244ed2b17aec42249343afdca54715d not found: ID does not exist" containerID="6d376e0db93af0c43dcaa8247829a1049244ed2b17aec42249343afdca54715d" Feb 02 18:25:08 crc kubenswrapper[4858]: I0202 18:25:08.054144 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d376e0db93af0c43dcaa8247829a1049244ed2b17aec42249343afdca54715d"} err="failed to get container status \"6d376e0db93af0c43dcaa8247829a1049244ed2b17aec42249343afdca54715d\": rpc error: code = NotFound desc = could not find container \"6d376e0db93af0c43dcaa8247829a1049244ed2b17aec42249343afdca54715d\": container with ID starting with 6d376e0db93af0c43dcaa8247829a1049244ed2b17aec42249343afdca54715d not found: ID does not exist" Feb 02 18:25:08 crc kubenswrapper[4858]: I0202 18:25:08.054188 4858 scope.go:117] "RemoveContainer" containerID="ee85f783be2848e72c42c9ef1f68aa06e3a666e00b6fd9d01d40b85b4675ee1d" Feb 02 18:25:08 crc kubenswrapper[4858]: E0202 18:25:08.054953 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee85f783be2848e72c42c9ef1f68aa06e3a666e00b6fd9d01d40b85b4675ee1d\": container with ID starting with ee85f783be2848e72c42c9ef1f68aa06e3a666e00b6fd9d01d40b85b4675ee1d not found: ID does not exist" containerID="ee85f783be2848e72c42c9ef1f68aa06e3a666e00b6fd9d01d40b85b4675ee1d" Feb 02 18:25:08 crc kubenswrapper[4858]: I0202 18:25:08.055039 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee85f783be2848e72c42c9ef1f68aa06e3a666e00b6fd9d01d40b85b4675ee1d"} err="failed to get container status \"ee85f783be2848e72c42c9ef1f68aa06e3a666e00b6fd9d01d40b85b4675ee1d\": rpc error: code = NotFound desc = could not find container \"ee85f783be2848e72c42c9ef1f68aa06e3a666e00b6fd9d01d40b85b4675ee1d\": container with ID starting with ee85f783be2848e72c42c9ef1f68aa06e3a666e00b6fd9d01d40b85b4675ee1d not found: ID does not exist" Feb 02 18:25:08 crc kubenswrapper[4858]: I0202 18:25:08.055089 4858 scope.go:117] "RemoveContainer" containerID="8ffc5d23402f4dbe97a1f5b42bbda96de7f05313ac5986f5d09130041327b034" Feb 02 18:25:08 crc kubenswrapper[4858]: E0202 18:25:08.055443 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ffc5d23402f4dbe97a1f5b42bbda96de7f05313ac5986f5d09130041327b034\": container with ID starting with 8ffc5d23402f4dbe97a1f5b42bbda96de7f05313ac5986f5d09130041327b034 not found: ID does not exist" containerID="8ffc5d23402f4dbe97a1f5b42bbda96de7f05313ac5986f5d09130041327b034" 
Feb 02 18:25:08 crc kubenswrapper[4858]: I0202 18:25:08.055479 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ffc5d23402f4dbe97a1f5b42bbda96de7f05313ac5986f5d09130041327b034"} err="failed to get container status \"8ffc5d23402f4dbe97a1f5b42bbda96de7f05313ac5986f5d09130041327b034\": rpc error: code = NotFound desc = could not find container \"8ffc5d23402f4dbe97a1f5b42bbda96de7f05313ac5986f5d09130041327b034\": container with ID starting with 8ffc5d23402f4dbe97a1f5b42bbda96de7f05313ac5986f5d09130041327b034 not found: ID does not exist" Feb 02 18:25:08 crc kubenswrapper[4858]: I0202 18:25:08.744540 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cc20ec5-3eab-4274-b736-cb83ced5299f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7cc20ec5-3eab-4274-b736-cb83ced5299f" (UID: "7cc20ec5-3eab-4274-b736-cb83ced5299f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 18:25:08 crc kubenswrapper[4858]: I0202 18:25:08.760812 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cc20ec5-3eab-4274-b736-cb83ced5299f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 18:25:08 crc kubenswrapper[4858]: I0202 18:25:08.897533 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6nlmc"] Feb 02 18:25:08 crc kubenswrapper[4858]: I0202 18:25:08.915606 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6nlmc"] Feb 02 18:25:10 crc kubenswrapper[4858]: I0202 18:25:10.415681 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cc20ec5-3eab-4274-b736-cb83ced5299f" path="/var/lib/kubelet/pods/7cc20ec5-3eab-4274-b736-cb83ced5299f/volumes" Feb 02 18:25:12 crc kubenswrapper[4858]: I0202 18:25:12.444336 4858 scope.go:117] "RemoveContainer" containerID="93d39d782047cb73b5cc52fd984be3b410001501ff2fba408c17222fa0fea02f"